Dataset schema (one entry per column: name, feature type, and observed size or cardinality):

| Column | Feature type | Range / cardinality |
|:----------------|:---------------------|:---------------------------|
| pipeline_tag | string (categorical) | 48 distinct values |
| library_name | string (categorical) | 198 distinct values |
| text | string | 1 to 900k characters |
| metadata | string | 2 to 438k characters |
| id | string | 5 to 122 characters |
| last_modified | null | n/a |
| tags | sequence | 1 to 1.84k items |
| sha | null | n/a |
| created_at | string | 25 characters (fixed) |
| arxiv | sequence | 0 to 201 items |
| languages | sequence | 0 to 1.83k items |
| tags_str | string | 17 to 9.34k characters |
| text_str | string | 0 to 389k characters |
| text_lists | sequence | 0 to 722 items |
| processed_texts | sequence | 1 to 723 items |
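The rows that follow list these columns in order, one value per line (null for empty fields). As a minimal sketch, rows with this schema can be read via 🤗 Datasets; the dataset id below is a hypothetical placeholder, since this dump does not name its source repository:

```python
# Minimal sketch, assuming the dump comes from a Hub dataset repository.
# "org/model-cards-dump" is a hypothetical placeholder id, not given in this dump.
from datasets import load_dataset

ds = load_dataset("org/model-cards-dump", split="train")
row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"])
print(row["text"][:200])  # raw model-card markdown
print(row["tags"])        # Hub tags, e.g. ["transformers", "arxiv:1910.09700", ...]
```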
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
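The card's quickstart is still a placeholder; as a hedged sketch for this row's repository (`masa-research/example-custom-tokenizer`, tagged `transformers`), loading would typically look like the following. That the repo ships a loadable tokenizer is an assumption based on its name and tags:

```python
# Hedged sketch, not taken from the card: loading this row's repository with
# 🤗 Transformers. The repo may contain only a tokenizer, not model weights.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("masa-research/example-custom-tokenizer")
print(tok("Hello, world!").input_ids)
```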
{"library_name": "transformers", "tags": []}
masa-research/example-custom-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T09:53:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_mouse_2-seqsight_16384_512_22M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_mouse_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_2) dataset. It achieves the following results on the evaluation set:
- Loss: 0.5694
- F1 Score: 0.7986
- Accuracy: 0.7988

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.4155        | 100.0  | 200   | 0.6444          | 0.7773   | 0.7774   |
| 0.1545        | 200.0  | 400   | 0.9031          | 0.7678   | 0.7683   |
| 0.077         | 300.0  | 600   | 1.0179          | 0.75     | 0.75     |
| 0.0492        | 400.0  | 800   | 1.2053          | 0.7369   | 0.7378   |
| 0.035         | 500.0  | 1000  | 1.2201          | 0.7591   | 0.7591   |
| 0.0288        | 600.0  | 1200  | 1.3462          | 0.7404   | 0.7409   |
| 0.0232        | 700.0  | 1400  | 1.4383          | 0.7529   | 0.7530   |
| 0.0195        | 800.0  | 1600  | 1.5447          | 0.7316   | 0.7317   |
| 0.0157        | 900.0  | 1800  | 1.6063          | 0.7347   | 0.7348   |
| 0.0138        | 1000.0 | 2000  | 1.6943          | 0.7256   | 0.7256   |
| 0.0117        | 1100.0 | 2200  | 1.8375          | 0.7347   | 0.7348   |
| 0.0112        | 1200.0 | 2400  | 1.7178          | 0.7317   | 0.7317   |
| 0.0095        | 1300.0 | 2600  | 1.9442          | 0.7467   | 0.7470   |
| 0.0102        | 1400.0 | 2800  | 1.7055          | 0.7469   | 0.7470   |
| 0.0086        | 1500.0 | 3000  | 1.7884          | 0.7378   | 0.7378   |
| 0.0086        | 1600.0 | 3200  | 1.7413          | 0.7408   | 0.7409   |
| 0.0082        | 1700.0 | 3400  | 1.8514          | 0.7430   | 0.7439   |
| 0.0075        | 1800.0 | 3600  | 2.0347          | 0.7462   | 0.7470   |
| 0.0069        | 1900.0 | 3800  | 2.0254          | 0.7560   | 0.7561   |
| 0.007         | 2000.0 | 4000  | 1.9290          | 0.7498   | 0.75     |
| 0.0072        | 2100.0 | 4200  | 1.9003          | 0.7347   | 0.7348   |
| 0.0064        | 2200.0 | 4400  | 1.8161          | 0.7529   | 0.7530   |
| 0.0057        | 2300.0 | 4600  | 1.8749          | 0.7469   | 0.7470   |
| 0.0059        | 2400.0 | 4800  | 2.0672          | 0.7621   | 0.7622   |
| 0.0052        | 2500.0 | 5000  | 1.9596          | 0.7652   | 0.7652   |
| 0.0049        | 2600.0 | 5200  | 1.9310          | 0.7591   | 0.7591   |
| 0.0052        | 2700.0 | 5400  | 2.0119          | 0.7531   | 0.7530   |
| 0.0052        | 2800.0 | 5600  | 2.0142          | 0.7530   | 0.7530   |
| 0.0045        | 2900.0 | 5800  | 2.1720          | 0.7496   | 0.75     |
| 0.0051        | 3000.0 | 6000  | 2.0024          | 0.7591   | 0.7591   |
| 0.0045        | 3100.0 | 6200  | 2.1561          | 0.7560   | 0.7561   |
| 0.0039        | 3200.0 | 6400  | 2.0863          | 0.7528   | 0.7530   |
| 0.0043        | 3300.0 | 6600  | 1.8411          | 0.7409   | 0.7409   |
| 0.0039        | 3400.0 | 6800  | 2.0618          | 0.7559   | 0.7561   |
| 0.004         | 3500.0 | 7000  | 2.1323          | 0.7591   | 0.7591   |
| 0.0034        | 3600.0 | 7200  | 2.1510          | 0.7497   | 0.75     |
| 0.0035        | 3700.0 | 7400  | 2.1193          | 0.7470   | 0.7470   |
| 0.0032        | 3800.0 | 7600  | 2.1593          | 0.7409   | 0.7409   |
| 0.0035        | 3900.0 | 7800  | 2.0658          | 0.7408   | 0.7409   |
| 0.0031        | 4000.0 | 8000  | 2.2867          | 0.7530   | 0.7530   |
| 0.0032        | 4100.0 | 8200  | 2.2120          | 0.7620   | 0.7622   |
| 0.0031        | 4200.0 | 8400  | 2.0866          | 0.7561   | 0.7561   |
| 0.003         | 4300.0 | 8600  | 2.1883          | 0.7439   | 0.7439   |
| 0.0032        | 4400.0 | 8800  | 2.1110          | 0.7530   | 0.7530   |
| 0.0026        | 4500.0 | 9000  | 2.2802          | 0.7591   | 0.7591   |
| 0.0029        | 4600.0 | 9200  | 2.2135          | 0.7591   | 0.7591   |
| 0.0027        | 4700.0 | 9400  | 2.2237          | 0.7591   | 0.7591   |
| 0.0029        | 4800.0 | 9600  | 2.1891          | 0.7591   | 0.7591   |
| 0.0028        | 4900.0 | 9800  | 2.2118          | 0.7530   | 0.7530   |
| 0.0025        | 5000.0 | 10000 | 2.2321          | 0.7591   | 0.7591   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
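The card does not include usage code; a hedged sketch of attaching this LoRA adapter to its base model with PEFT follows. The task head and label count are assumptions (a binary classification head is guessed from the near-identical F1 and accuracy), and `trust_remote_code` may or may not be needed for the base architecture:

```python
# Hedged sketch: attach the adapter to its base model with PEFT.
# num_labels=2 is an assumption; the card does not state the label set.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "mahdibaghbanzadeh/seqsight_16384_512_22M", num_labels=2
)
model = PeftModel.from_pretrained(
    base, "mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L32_all"
)
tok = AutoTokenizer.from_pretrained("mahdibaghbanzadeh/seqsight_16384_512_22M")
```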
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_mouse_2-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_2-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T09:53:14+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_mouse\_2-seqsight\_16384\_512\_22M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_mouse\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.5694 * F1 Score: 0.7986 * Accuracy: 0.7988 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
## About

<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->

Static quants of https://huggingface.co/Noodlz/IvanDrogo-7B

<!-- provided-files -->
Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/IvanDrogo-7B-GGUF/resolve/main/IvanDrogo-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.

<!-- end -->
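Beyond the linked READMEs, a hedged sketch of running one of the listed quants with `llama-cpp-python` (one common GGUF runtime; not the uploader's own instructions) would look like this:

```python
# Hedged sketch: download the "fast, recommended" Q4_K_M quant and run it
# with llama-cpp-python. Context size and sampling settings are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="mradermacher/IvanDrogo-7B-GGUF",
    filename="IvanDrogo-7B.Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```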
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Noodlz/IvanDrogo-7B", "quantized_by": "mradermacher"}
mradermacher/IvanDrogo-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Noodlz/IvanDrogo-7B", "endpoints_compatible", "region:us" ]
null
2024-04-16T09:55:31+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Noodlz/IvanDrogo-7B #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Noodlz/IvanDrogo-7B #endpoints_compatible #region-us \n" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs

This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
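The card leaves usage empty; a hedged inference sketch for a fine-tuned Donut checkpoint follows. The task prompt token (`<s_cord-v2>` here) is an assumption, since the card does not state which task token the fine-tune uses:

```python
# Hedged sketch for Donut document parsing; the task prompt "<s_cord-v2>"
# is a guess, since the card does not name the fine-tune's task token.
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("document.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(
    "<s_cord-v2>", add_special_tokens=False, return_tensors="pt"
).input_ids
outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```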
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs", "results": []}]}
tedad09/PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-16T09:55:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-CR-Cluster5di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_splice_reconstructed-seqsight_16384_512_22M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_splice_reconstructed](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_splice_reconstructed) dataset. It achieves the following results on the evaluation set:
- Loss: 0.7807
- F1 Score: 0.6778
- Accuracy: 0.6835

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 1536
- eval_batch_size: 1536
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.9558        | 8.33   | 200   | 0.8777          | 0.5519   | 0.6142   |
| 0.8448        | 16.67  | 400   | 0.8246          | 0.6079   | 0.6350   |
| 0.7997        | 25.0   | 600   | 0.8059          | 0.6109   | 0.6456   |
| 0.769         | 33.33  | 800   | 0.7911          | 0.6315   | 0.6488   |
| 0.7448        | 41.67  | 1000  | 0.7809          | 0.6291   | 0.6532   |
| 0.7231        | 50.0   | 1200  | 0.7663          | 0.6422   | 0.6534   |
| 0.7073        | 58.33  | 1400  | 0.7673          | 0.6490   | 0.6585   |
| 0.6935        | 66.67  | 1600  | 0.7615          | 0.6471   | 0.6618   |
| 0.6821        | 75.0   | 1800  | 0.7576          | 0.6491   | 0.6637   |
| 0.6719        | 83.33  | 2000  | 0.7626          | 0.6545   | 0.6657   |
| 0.664         | 91.67  | 2200  | 0.7563          | 0.6582   | 0.6690   |
| 0.6571        | 100.0  | 2400  | 0.7585          | 0.6626   | 0.6694   |
| 0.6511        | 108.33 | 2600  | 0.7535          | 0.6628   | 0.6712   |
| 0.6456        | 116.67 | 2800  | 0.7539          | 0.6564   | 0.6725   |
| 0.6404        | 125.0  | 3000  | 0.7480          | 0.6631   | 0.6725   |
| 0.6359        | 133.33 | 3200  | 0.7607          | 0.6629   | 0.6727   |
| 0.6319        | 141.67 | 3400  | 0.7484          | 0.6662   | 0.6743   |
| 0.6252        | 150.0  | 3600  | 0.7454          | 0.6668   | 0.6734   |
| 0.6231        | 158.33 | 3800  | 0.7561          | 0.6585   | 0.6760   |
| 0.6186        | 166.67 | 4000  | 0.7484          | 0.6683   | 0.6743   |
| 0.613         | 175.0  | 4200  | 0.7539          | 0.6585   | 0.6642   |
| 0.6083        | 183.33 | 4400  | 0.7454          | 0.6661   | 0.6721   |
| 0.6057        | 191.67 | 4600  | 0.7512          | 0.6638   | 0.6723   |
| 0.6           | 200.0  | 4800  | 0.7509          | 0.6643   | 0.6754   |
| 0.5962        | 208.33 | 5000  | 0.7567          | 0.6663   | 0.6719   |
| 0.5921        | 216.67 | 5200  | 0.7563          | 0.6624   | 0.6730   |
| 0.5898        | 225.0  | 5400  | 0.7488          | 0.6634   | 0.6727   |
| 0.5854        | 233.33 | 5600  | 0.7518          | 0.6652   | 0.6723   |
| 0.5792        | 241.67 | 5800  | 0.7710          | 0.6650   | 0.6743   |
| 0.58          | 250.0  | 6000  | 0.7636          | 0.6636   | 0.6740   |
| 0.5737        | 258.33 | 6200  | 0.7669          | 0.6645   | 0.6751   |
| 0.5704        | 266.67 | 6400  | 0.7688          | 0.6688   | 0.6758   |
| 0.5661        | 275.0  | 6600  | 0.7690          | 0.6639   | 0.6749   |
| 0.5633        | 283.33 | 6800  | 0.7643          | 0.6671   | 0.6751   |
| 0.5594        | 291.67 | 7000  | 0.7747          | 0.6678   | 0.6765   |
| 0.5564        | 300.0  | 7200  | 0.7732          | 0.6705   | 0.6776   |
| 0.5537        | 308.33 | 7400  | 0.7740          | 0.6702   | 0.6808   |
| 0.5512        | 316.67 | 7600  | 0.7753          | 0.6662   | 0.6749   |
| 0.5474        | 325.0  | 7800  | 0.7756          | 0.6694   | 0.6778   |
| 0.5458        | 333.33 | 8000  | 0.7787          | 0.6710   | 0.6797   |
| 0.5446        | 341.67 | 8200  | 0.7817          | 0.6708   | 0.6795   |
| 0.5432        | 350.0  | 8400  | 0.7820          | 0.6684   | 0.6765   |
| 0.5407        | 358.33 | 8600  | 0.7782          | 0.6719   | 0.6776   |
| 0.5381        | 366.67 | 8800  | 0.7778          | 0.6708   | 0.6780   |
| 0.5373        | 375.0  | 9000  | 0.7827          | 0.6714   | 0.6791   |
| 0.5366        | 383.33 | 9200  | 0.7765          | 0.6716   | 0.6780   |
| 0.5344        | 391.67 | 9400  | 0.7832          | 0.6708   | 0.6771   |
| 0.5348        | 400.0  | 9600  | 0.7795          | 0.6725   | 0.6795   |
| 0.5337        | 408.33 | 9800  | 0.7778          | 0.6718   | 0.6784   |
| 0.5332        | 416.67 | 10000 | 0.7800          | 0.6720   | 0.6793   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
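As a hedged sketch, the listed hyperparameters map onto 🤗 `TrainingArguments` roughly as follows; whether the reported batch size is per-device or total is an assumption, and the output directory is a placeholder:

```python
# Hedged mapping of the card's hyperparameters onto TrainingArguments.
# Treating the card's batch size as per-device is an assumption here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_splice_reconstructed-L32_all",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=1536,
    per_device_eval_batch_size=1536,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=10_000,
)
```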
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_splice_reconstructed-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_splice_reconstructed-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T09:56:28+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_splice\_reconstructed-seqsight\_16384\_512\_22M-L32\_all ============================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_splice\_reconstructed dataset. It achieves the following results on the evaluation set: * Loss: 0.7807 * F1 Score: 0.6778 * Accuracy: 0.6835 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
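The quickstart is again a placeholder; since this row's tags indicate a `StableDiffusionPipeline` checkpoint (`phamthanhdung/merge_nsfw_rv51_latest`), a hedged 🧨 Diffusers sketch would look like this, with prompt, dtype, and device as placeholders:

```python
# Hedged sketch based on the repo's diffusers:StableDiffusionPipeline tag;
# not from the card itself. Prompt and precision are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "phamthanhdung/merge_nsfw_rv51_latest", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("out.png")
```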
{"library_name": "diffusers"}
phamthanhdung/merge_nsfw_rv51_latest
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-16T09:56:34+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Antler-7B-RP-v2

[Click here for the GGUF version / GGUF版はこちら](https://huggingface.co/Aratako/Antler-7B-RP-v2-GGUF)

## Overview

This model was fine-tuned with LoRA on role-play datasets, using [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B) as the base model.

Its training-data format differs slightly from that of [Aratako/Antler-7B-RP](https://huggingface.co/Aratako/Antler-7B-RP); as a result, this model outputs not only character dialogue but also descriptions of the surrounding situation more actively than the previous model.

## Prompt format

Use Mistral's chat template. Given the format of the training data, a structure like the following is recommended:

```
[INST] {role-play instructions}
{description of the setting and premise}
{profile of the character the assistant plays}
{profile of the character the user plays}
{role-play instructions}
{the user's first message} [/INST]
```

Inputs should take the form `キャラ名「発話」` (character name followed by the utterance in corner brackets), with feelings and scene descriptions placed inside parentheses.

### Example

**Input** (in Japanese, as the model expects)

```
[INST] 今からロールプレイを行いましょう。"桜"というキャラとしてロールプレイしてください。会話相手は"悠人"という人物です。人物の設定を以下に示します。

あなたがなりきる"桜"というキャラクターの設定は以下の通りです。
名前:桜
年齢:24歳
職業:悠人に仕えるメイド
容姿:黒髪黒目、ロングヘアー、スリムな体型。
口調:丁寧語を使う。一人称は「私」で、主人である悠人のことは「ご主人様」と呼ぶ。
性格:母性が強く、甘えられるのが好き。料理や家事が得意で家庭的。可愛いものが好き。ご主人様を尊敬しており、彼の幸せを第一に考える。
過去の出来事:悠人を支えるために、彼の家に仕えることを決めた。

また、あなたが会話する相手である"悠人"という人物の設定は以下の通りです。
名前:悠人
年齢:20歳
職業:貴族、桜の主人
容姿:黒髪黒目、背は高め
性格:かなりの甘え上手。桜が大好き。

それでは、上記の設定をもとにして"桜"として会話してください。
回答の中では、"桜"のセリフや心情の描写を含めてください。

悠人「おはよう!」(リビングに降りてきた悠人は桜にあいさつする) [/INST]
```

**Output**

```
桜「おはようございます♪」(元気な声で返事をする。今日も朝から可愛い笑顔だ)
```

For multi-turn conversations, append an eos_token (`</s>`) at the end of each assistant response, as below:

```
[INST] {the same role-play instructions and character profiles as in the example above}

悠人「おはよう!」(リビングに降りてきた悠人は桜にあいさつする) [/INST] 桜「おはようございます♪」(元気な声で返事をする。今日も朝から可愛い笑顔だ) </s>[INST] 悠人「うん、今日もよろしく」 [/INST]
```

## Known bugs

Occasionally the model ends its output immediately (emitting the EOS token) without generating anything. The cause is unknown; letting it continue writing, or simply retrying, works fine.

Also, perhaps because the base model is not instruction-tuned, it sometimes breaks role-play and tries to produce the other party's lines. Setting `{user}「` as a stop word is a reasonable countermeasure.

## Datasets used

- [grimulkan/LimaRP-augmented](https://huggingface.co/datasets/grimulkan/LimaRP-augmented)
- [Aratako/Rosebleu-1on1-Dialogues-RP](https://huggingface.co/datasets/Aratako/Rosebleu-1on1-Dialogues-RP)
  - [Aratako/Antler-7B-RP](https://huggingface.co/datasets/Aratako/Antler-7B-RP) used v1 of this dataset, while this model uses v2. Possibly as a result, situation descriptions appear more frequently in the output.

## Training setup

Training was done on a GPU server rented from Runpod, using 8x A6000. The main training parameters are as follows:

- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "lm_head"]
- learning_rate: 2e-5
- num_train_epochs: 10 epochs
- batch_size: 64
- max_seq_length: 8192

## License

Released under the apache-2.0 license. However, since the license of the base model [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B) is unclear, this may change if I receive any communication from its author.
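Since the card instructs using Mistral's chat template, a hedged sketch of building a multi-turn prompt with `apply_chat_template` follows; it assumes the repo's tokenizer ships that template, and the message contents are illustrative placeholders:

```python
# Hedged sketch: build the recommended [INST] ... [/INST] ... </s> structure
# via the tokenizer's chat template (assumed to be Mistral's, per the card).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Aratako/Antler-7B-RP-v2")
messages = [
    {"role": "user", "content": "Role-play instructions + character profiles + first line"},
    {"role": "assistant", "content": "桜「おはようございます♪」(元気な声で返事をする)"},
    {"role": "user", "content": "悠人「うん、今日もよろしく」"},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # each assistant turn is closed with </s>, as the card requires
```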
{"language": ["ja"], "license": "apache-2.0", "library_name": "transformers", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["grimulkan/LimaRP-augmented", "Aratako/Rosebleu-1on1-Dialogues-RP"], "base_model": ["Elizezen/Antler-7B"]}
Aratako/Antler-7B-RP-v2
null
[ "transformers", "safetensors", "mistral", "text-generation", "not-for-all-audiences", "nsfw", "ja", "dataset:grimulkan/LimaRP-augmented", "dataset:Aratako/Rosebleu-1on1-Dialogues-RP", "base_model:Elizezen/Antler-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T09:56:37+00:00
[]
[ "ja" ]
TAGS #transformers #safetensors #mistral #text-generation #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Elizezen/Antler-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Antler-7B-RP-v2 GGUF版はこちら/Click here for the GGUF version ## 概要 Elizezen/Antler-7Bをベースに、ロールプレイ用のデータセットを用いてLoRAでファインチューニングしたモデルです。 Aratako/Antler-7B-RPと少し学習データのフォーマットが違っており、結果的にこのモデルはキャラのセリフだけでなく状況の描写なども以前のモデルより積極的に出力するようになっています。 ## プロンプトフォーマット Mistralのchat templateを利用してください。また、学習に利用したデータのフォーマットの関係上、以下のような形式が望ましいと思われます。 また、入力は'キャラ名「発話」'というような形式で、心情や情景描写は()の中で行う事が望ましいと思われます。 ### 実例 入力 出力 また、マルチターンの会話の際には以下のようにassistantの各応答の終わりに都度eos_token('</s>')を入れるようにしてください。 ## 既知のバグ 時折、何も出力せず出力を終了(EOSトークンを出力)します。原因は分かっていませんが、そのまま続きを書かせるか、リトライすれば問題なく動作します。 また、元がinstruction tuningされていないモデルであるからか、ロールプレイを守れず対話相手のセリフを出力しようとすることもあります。'{user}「'をストップワードに設定するなどで対策していただければと思います。 ## 使用データセット - grimulkan/LimaRP-augmented - Aratako/Rosebleu-1on1-Dialogues-RP - Aratako/Antler-7B-RPではv1の方を利用していましたが、こちらはv2を利用しています。その影響か、出力で状況描写がより頻繁に行われるようになっています。 ## 学習の設定 RunpodでGPUサーバを借り、A6000x8で学習を行いました。主な学習パラメータは以下の通りです。 - lora_r: 128 - lisa_alpha: 256 - lora_dropout: 0.05 - lora_target_modules: ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "lm_head"] - learning_rate: 2e-5 - num_train_epochs: 10 epochs - batch_size: 64 - max_seq_length: 8192 ## ライセンス apache-2.0ライセンスの元公開いたします。 ただし、元モデルであるElizezen/Antler-7Bのライセンスが不明であるため、作者様から何らかの連絡等を受けた場合変更の可能性があります。
[ "# Antler-7B-RP-v2\nGGUF版はこちら/Click here for the GGUF version", "## 概要\n\nElizezen/Antler-7Bをベースに、ロールプレイ用のデータセットを用いてLoRAでファインチューニングしたモデルです。\n\nAratako/Antler-7B-RPと少し学習データのフォーマットが違っており、結果的にこのモデルはキャラのセリフだけでなく状況の描写なども以前のモデルより積極的に出力するようになっています。", "## プロンプトフォーマット\nMistralのchat templateを利用してください。また、学習に利用したデータのフォーマットの関係上、以下のような形式が望ましいと思われます。\n\n\n\nまた、入力は'キャラ名「発話」'というような形式で、心情や情景描写は()の中で行う事が望ましいと思われます。", "### 実例\n入力\n\n\n\n出力\n\n\nまた、マルチターンの会話の際には以下のようにassistantの各応答の終わりに都度eos_token('</s>')を入れるようにしてください。", "## 既知のバグ\n時折、何も出力せず出力を終了(EOSトークンを出力)します。原因は分かっていませんが、そのまま続きを書かせるか、リトライすれば問題なく動作します。\n\nまた、元がinstruction tuningされていないモデルであるからか、ロールプレイを守れず対話相手のセリフを出力しようとすることもあります。'{user}「'をストップワードに設定するなどで対策していただければと思います。", "## 使用データセット\n- grimulkan/LimaRP-augmented\n- Aratako/Rosebleu-1on1-Dialogues-RP\n - Aratako/Antler-7B-RPではv1の方を利用していましたが、こちらはv2を利用しています。その影響か、出力で状況描写がより頻繁に行われるようになっています。", "## 学習の設定\nRunpodでGPUサーバを借り、A6000x8で学習を行いました。主な学習パラメータは以下の通りです。\n- lora_r: 128\n- lisa_alpha: 256\n- lora_dropout: 0.05\n- lora_target_modules: [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\", \"lm_head\"]\n- learning_rate: 2e-5\n- num_train_epochs: 10 epochs\n- batch_size: 64\n- max_seq_length: 8192", "## ライセンス\napache-2.0ライセンスの元公開いたします。\n\nただし、元モデルであるElizezen/Antler-7Bのライセンスが不明であるため、作者様から何らかの連絡等を受けた場合変更の可能性があります。" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Elizezen/Antler-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Antler-7B-RP-v2\nGGUF版はこちら/Click here for the GGUF version", "## 概要\n\nElizezen/Antler-7Bをベースに、ロールプレイ用のデータセットを用いてLoRAでファインチューニングしたモデルです。\n\nAratako/Antler-7B-RPと少し学習データのフォーマットが違っており、結果的にこのモデルはキャラのセリフだけでなく状況の描写なども以前のモデルより積極的に出力するようになっています。", "## プロンプトフォーマット\nMistralのchat templateを利用してください。また、学習に利用したデータのフォーマットの関係上、以下のような形式が望ましいと思われます。\n\n\n\nまた、入力は'キャラ名「発話」'というような形式で、心情や情景描写は()の中で行う事が望ましいと思われます。", "### 実例\n入力\n\n\n\n出力\n\n\nまた、マルチターンの会話の際には以下のようにassistantの各応答の終わりに都度eos_token('</s>')を入れるようにしてください。", "## 既知のバグ\n時折、何も出力せず出力を終了(EOSトークンを出力)します。原因は分かっていませんが、そのまま続きを書かせるか、リトライすれば問題なく動作します。\n\nまた、元がinstruction tuningされていないモデルであるからか、ロールプレイを守れず対話相手のセリフを出力しようとすることもあります。'{user}「'をストップワードに設定するなどで対策していただければと思います。", "## 使用データセット\n- grimulkan/LimaRP-augmented\n- Aratako/Rosebleu-1on1-Dialogues-RP\n - Aratako/Antler-7B-RPではv1の方を利用していましたが、こちらはv2を利用しています。その影響か、出力で状況描写がより頻繁に行われるようになっています。", "## 学習の設定\nRunpodでGPUサーバを借り、A6000x8で学習を行いました。主な学習パラメータは以下の通りです。\n- lora_r: 128\n- lisa_alpha: 256\n- lora_dropout: 0.05\n- lora_target_modules: [\"q_proj\", \"k_proj\", \"v_proj\", \"o_proj\", \"gate_proj\", \"up_proj\", \"down_proj\", \"lm_head\"]\n- learning_rate: 2e-5\n- num_train_epochs: 10 epochs\n- batch_size: 64\n- max_seq_length: 8192", "## ライセンス\napache-2.0ライセンスの元公開いたします。\n\nただし、元モデルであるElizezen/Antler-7Bのライセンスが不明であるため、作者様から何らかの連絡等を受けた場合変更の可能性があります。" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# layoutLMv3-finetuned-confluence

This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.1354
- Precision: 0.8992
- Recall: 0.9126
- F1: 0.9058
- Accuracy: 0.8578

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 8.33  | 250  | 0.9563          | 0.8807    | 0.9056 | 0.8930 | 0.8505   |
| 0.0199        | 16.67 | 500  | 1.0827          | 0.8792    | 0.9041 | 0.8915 | 0.8393   |
| 0.0199        | 25.0  | 750  | 1.0539          | 0.8834    | 0.9036 | 0.8934 | 0.8493   |
| 0.0048        | 33.33 | 1000 | 1.1217          | 0.8944    | 0.9131 | 0.9036 | 0.8583   |
| 0.0048        | 41.67 | 1250 | 1.1195          | 0.9004    | 0.9071 | 0.9037 | 0.8616   |
| 0.0025        | 50.0  | 1500 | 1.1927          | 0.8923    | 0.9056 | 0.8989 | 0.8467   |
| 0.0025        | 58.33 | 1750 | 1.1155          | 0.9017    | 0.9116 | 0.9066 | 0.8640   |
| 0.0008        | 66.67 | 2000 | 1.1871          | 0.8971    | 0.9056 | 0.9014 | 0.8395   |
| 0.0008        | 75.0  | 2250 | 1.1709          | 0.9007    | 0.9106 | 0.9056 | 0.8420   |
| 0.0006        | 83.33 | 2500 | 1.1354          | 0.8992    | 0.9126 | 0.9058 | 0.8578   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
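The card has no usage section; a hedged token-classification sketch for a LayoutLMv3 fine-tune follows. The label names are unknown, so only raw label ids are printed, and OCR via `pytesseract` is assumed to be installed:

```python
# Hedged sketch: LayoutLMv3 token classification on a page image.
# apply_ocr=True makes the processor run pytesseract to extract words and boxes.
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

repo = "mbs07/layoutLMv3-finetuned-confluence"
processor = AutoProcessor.from_pretrained(repo, apply_ocr=True)
model = LayoutLMv3ForTokenClassification.from_pretrained(repo)

image = Image.open("page.png").convert("RGB")
enc = processor(image, return_tensors="pt")
logits = model(**enc).logits
print(logits.argmax(-1))  # label ids per token; the label map is not in the card
```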
{"license": "cc-by-nc-sa-4.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/layoutlmv3-base", "model-index": [{"name": "layoutLMv3-finetuned-confluence", "results": []}]}
mbs07/layoutLMv3-finetuned-confluence
null
[ "transformers", "tensorboard", "safetensors", "layoutlmv3", "token-classification", "generated_from_trainer", "base_model:microsoft/layoutlmv3-base", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:00:03+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us
layoutLMv3-finetuned-confluence =============================== This model is a fine-tuned version of microsoft/layoutlmv3-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1354 * Precision: 0.8992 * Recall: 0.9126 * F1: 0.9058 * Accuracy: 0.8578 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 5 * eval\_batch\_size: 5 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 2500 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 5\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2500", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #layoutlmv3 #token-classification #generated_from_trainer #base_model-microsoft/layoutlmv3-base #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 5\n* eval\\_batch\\_size: 5\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 2500", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-ocr-test This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
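No usage example is given. A minimal sketch, assuming the repository contains full model weights loadable with `AutoModelForVision2Seq` (the processor is taken from the base `HuggingFaceM4/idefics2-8b`, since the repo tags list only `safetensors`), might be:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "manu/idefics2-8b-ocr-test", torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("receipt.png")  # hypothetical input image

# Idefics2 expects a chat-style prompt with an image placeholder
messages = [{"role": "user",
             "content": [{"type": "image"},
                         {"type": "text", "text": "Transcribe the text in this image."}]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```

Note that the card pins `Transformers 4.40.0.dev0`, so a recent transformers release is likely required.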
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-ocr-test", "results": []}]}
manu/idefics2-8b-ocr-test
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:00:40+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# idefics2-8b-ocr-test This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 20 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.16.1 - Tokenizers 0.15.2
[ "# idefics2-8b-ocr-test\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 20\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# idefics2-8b-ocr-test\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 4\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 20\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.1.2+cu121\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
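For reference, a hedged inference sketch for a Donut fine-tune of this kind; the task-start token is a guess (it depends on how the training data was serialized) and `polizza.png` is a placeholder file name:

```python
import re
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

model_id = "tedad09/PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs"
processor = DonutProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

image = Image.open("polizza.png").convert("RGB")  # placeholder document scan
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task-start token depends on the fine-tuning setup; "<s>" is only a guess.
decoder_input_ids = processor.tokenizer(
    "<s>", add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=model.config.decoder.max_position_embeddings,
        eos_token_id=processor.tokenizer.eos_token_id,
        pad_token_id=processor.tokenizer.pad_token_id,
    )

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(
    processor.tokenizer.pad_token, ""
)
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task-start token
print(processor.token2json(sequence))  # Donut emits a token sequence convertible to JSON
```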
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs", "results": []}]}
tedad09/PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:00:47+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-CR-Cluster6di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
null
https://beko.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a8%d9%8a%d9%83%d9%88-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://beko.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a8%d9%8a%d9%83%d9%88-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://beko.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a8%d9%8a%d9%83%d9%88-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://beko.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a8%d9%8a%d9%83%d9%88-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://beko.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a8%d9%8a%d9%83%d9%88-%d9%81%d9%8a%d8%b5%d9%84/
https://sharp.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%b4%d8%a7%d8%b1%d8%a8-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://sharp.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%b4%d8%a7%d8%b1%d8%a8-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://sharp.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%b4%d8%a7%d8%b1%d8%a8-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://sharp.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%b4%d8%a7%d8%b1%d8%a8-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://sharp.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%b4%d8%a7%d8%b1%d8%a8-%d9%81%d9%8a%d8%b5%d9%84/
https://toshiba.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%aa%d9%88%d8%b4%d9%8a%d8%a8%d8%a7-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://toshiba.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%aa%d9%88%d8%b4%d9%8a%d8%a8%d8%a7-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://toshiba.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%aa%d9%88%d8%b4%d9%8a%d8%a8%d8%a7-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://toshiba.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%aa%d9%88%d8%b4%d9%8a%d8%a8%d8%a7-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://toshiba.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%20%20%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%aa%d9%88%d8%b4%d9%8a%d8%a8%d8%a7-%d9%81%d9%8a%d8%b5%d9%84/
https://unionaire.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%88%d9%86-%d8%a7%d9%8a%d8%b1-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://unionaire.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%88%d9%86-%d8%a7%d9%8a%d8%b1-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://unionaire.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%88%d9%86-%d8%a7%d9%8a%d8%b1-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://unionaire.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%88%d9%86-%d8%a7%d9%8a%d8%b1-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://unionaire.twkel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%88%d9%86-%d8%a7%d9%8a%d8%b1-%d9%81%d9%8a%d8%b5%d9%84/
https://ariston.washersfix.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d8%b1%d9%8a%d8%b3%d8%aa%d9%88%d9%86-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://ariston.washersfix.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d8%b1%d9%8a%d8%b3%d8%aa%d9%88%d9%86-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://ariston.washersfix.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d8%b1%d9%8a%d8%b3%d8%aa%d9%88%d9%86-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://ariston.washersfix.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d8%b1%d9%8a%d8%b3%d8%aa%d9%88%d9%86-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://ariston.washersfix.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d8%b1%d9%8a%d8%b3%d8%aa%d9%88%d9%86-%d9%81%d9%8a%d8%b5%d9%84/
https://universal.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%81%d8%b1%d8%b3%d8%a7%d9%84-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://universal.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%81%d8%b1%d8%b3%d8%a7%d9%84-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://universal.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%81%d8%b1%d8%b3%d8%a7%d9%84-%d9%81%d9%8a%d8%b5%d9%84/
https://universal.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%81%d8%b1%d8%b3%d8%a7%d9%84-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://universal.alsiyanuh.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%8a%d9%88%d9%86%d9%8a%d9%81%d8%b1%d8%b3%d8%a7%d9%84-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://hitachi.twekel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%87%d9%8a%d8%aa%d8%a7%d8%b4%d9%8a-%d8%a7%d9%84%d8%af%d9%82%d9%87%d9%84%d9%8a%d8%a9/
https://hitachi.twekel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%87%d9%8a%d8%aa%d8%a7%d8%b4%d9%8a-%d8%a7%d9%84%d8%ba%d8%b1%d8%a8%d9%8a%d8%a9/
https://hitachi.twekel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%87%d9%8a%d8%aa%d8%a7%d8%b4%d9%8a-%d8%a7%d9%84%d9%87%d8%b1%d9%85/
https://hitachi.twekel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%87%d9%8a%d8%aa%d8%a7%d8%b4%d9%8a-%d9%81%d9%8a%d8%b5%d9%84/
https://hitachi.twekel.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%87%d9%8a%d8%aa%d8%a7%d8%b4%d9%8a-%d8%a7%d9%84%d8%a8%d8%ad%d9%8a%d8%b1%d8%a9/
https://whitewhale.washersfix.com/%d9%88%d9%83%d9%8a%d9%84-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%88%d8%a7%d9%8a%d8%aa-%d9%88%d9%8a%d9%84-%d8%a7%d9%84%d9%85%d8%b9%d8%a7%d8%af%d9%8a/
https://whitewhale.washersfix.com/%d9%88%d9%83%d9%8a%d9%84-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%88%d8%a7%d9%8a%d8%aa-%d9%88%d9%8a%d9%84-%d8%ad%d9%84%d9%88%d8%a7%d9%86/
https://whitewhale.siantk.com/%d8%b4%d8%b1%d9%83%d8%a9-%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d9%88%d8%a7%d9%8a%d8%aa-%d9%88%d9%8a%d9%84-%d8%a8%d9%86%d9%87%d8%a7/
https://lg-maintenanc.com/%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d9%84-%d8%ac%d9%8a-%d9%85%d9%86%d8%b4%d9%8a%d8%a9-%d8%a7%d9%84%d8%a8%d9%83%d8%b1%d9%89/
https://lg-maintenanc.com/%d8%b5%d9%8a%d8%a7%d9%86%d8%a9-%d8%a7%d9%84-%d8%ac%d9%8a-%d8%b9%d9%85%d8%a7%d8%b1%d8%a7%d8%aa-%d8%a7%d9%84%d8%b9%d8%a8%d9%88%d8%b1/
{"license": "cc-by-nc-2.0", "tags": ["\u0635\u064a\u0627\u0646\u0629 \u063a\u0633\u0627\u0644\u0627\u062a - \u063a\u0633\u0627\u0644\u0627\u062a \u0627\u0637\u0628\u0627\u0642 - \u062b\u0644\u0627\u062c\u0627\u062a - \u062f\u064a\u0628 \u0641\u0631\u064a\u0632\u0631 - \u0634\u0627\u0634\u0627\u062a - \u0645\u064a\u0643\u0631\u0648\u064a\u0641"]}
noon2024/seyana
null
[ "صيانة غسالات - غسالات اطباق - ثلاجات - ديب فريزر - شاشات - ميكرويف", "license:cc-by-nc-2.0", "region:us" ]
null
2024-04-16T10:02:21+00:00
[]
[]
TAGS #صيانة غسالات - غسالات اطباق - ثلاجات - ديب فريزر - شاشات - ميكرويف #license-cc-by-nc-2.0 #region-us
URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#صيانة غسالات - غسالات اطباق - ثلاجات - ديب فريزر - شاشات - ميكرويف #license-cc-by-nc-2.0 #region-us \n" ]
text-classification
transformers
Binary causal sentence classification: * LABEL_0 = Non-causal * LABEL_1 = Causal See the project repository here: https://github.com/rasoulnorouzi/cessc/tree/main
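A quick way to try the classifier is the `transformers` pipeline; this is a minimal sketch, with example sentences shortened from the widget examples in the metadata below:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="rasoultilburg/ssc_bert")

sentences = [
    "The storm weakened because dry air entered the LLCC.",         # expect LABEL_1 (Causal)
    "Several vent gas scrubbers had been out of service as well.",  # expect LABEL_0 (Non-causal)
]
for sent, pred in zip(sentences, clf(sentences)):
    print(f"{pred['label']} ({pred['score']:.3f}): {sent}")
```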
{"language": "en", "license": "gpl-3.0", "widget": [{"text": "In the beginning, Sonca seemed to have intensified rapidly since its formation , however, soon the storm weakened back to a minimal tropical storm because of dry air entering the LLCC that caused it to elongate and weaken.", "example_title": "Causal Example 1"}, {"text": "Our findings thus far show that the sanction reduced the number of chips that participants allocated to themselves and that it only increased the number of chips allocated to the yellow pool when there were two options.", "example_title": "Causal Example 2"}, {"text": "In addition, several vent gas scrubbers had been out of service as well as the steam boiler, intended to clean the pipes.", "example_title": "Non-causal Example 1"}, {"text": "First, we can assess the correlation between beliefs and contributions, which we expect to differ between types of players and which helps us to check on the player type as elicited in the P-experiment.", "example_title": "Non-causal Example 2"}]}
rasoultilburg/ssc_bert
null
[ "transformers", "safetensors", "bert", "text-classification", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:03:31+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #bert #text-classification #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
Binary causal sentence classification: * LABEL_0 = Non-causal * LABEL_1 = Causal See the project repository here: URL
[]
[ "TAGS\n#transformers #safetensors #bert #text-classification #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.02-len_3-filtered-v2 This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
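The card does not show how to load the adapter. Below is a minimal PEFT sketch; the task head is not stated in the card, so loading the base model with `AutoModelForQuestionAnswering` (extractive QA on SberQuAD) is an assumption, as is the example question:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

base_id = "ai-forever/ruBert-base"
adapter_id = "Shalazary/ruBert-base-sberquad-0.02-len_3-filtered-v2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForQuestionAnswering.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the trained adapter

# Hypothetical QA pair (Russian, matching the SberQuAD domain)
question = "Где находится Эрмитаж?"
context = "Эрмитаж находится в Санкт-Петербурге."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

start = out.start_logits.argmax()
end = out.end_logits.argmax() + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```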
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.02-len_3-filtered-v2", "results": []}]}
Shalazary/ruBert-base-sberquad-0.02-len_3-filtered-v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:04:17+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.02-len_3-filtered-v2 This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.02-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.02-len_3-filtered-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs", "results": []}]}
tedad09/PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:07:43+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-CR-Cluster7di7-SenzaPre-3Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-speech
null
# 🍵 Matxa-TTS (Matcha-TTS) Catalan Multiaccent

## Table of Contents
<details>
<summary>Click to expand</summary>

- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Additional information](#additional-information)

</details>

## Model Description

🍵 **Matxa-TTS** is based on **Matcha-TTS**, an encoder-decoder architecture designed for fast acoustic modelling in TTS.
The encoder part is based on a text encoder and a phoneme duration predictor that together predict averaged acoustic features.
The decoder has essentially a U-Net backbone inspired by [Grad-TTS](https://arxiv.org/pdf/2105.06337.pdf), which is based on the Transformer architecture.
In the latter, replacing 2D CNNs with 1D CNNs yields a large reduction in memory consumption and fast synthesis.

**Matxa-TTS** is a non-autoregressive model trained with optimal-transport conditional flow matching (OT-CFM).
This yields an ODE-based decoder capable of generating high output quality in fewer synthesis steps than models trained using score matching.

## Intended Uses and Limitations

This model is intended to serve as an acoustic feature generator for multispeaker text-to-speech systems for the Catalan language.
It has been fine-tuned using a Catalan phonemizer; therefore, if the model is used for other languages it will not produce intelligible samples after mapping its output into a speech waveform.

The quality of the samples can vary depending on the speaker.
This may be due to the sensitivity of the model in learning specific frequencies and also due to the quality of samples for each speaker.

As explained in the licenses section, the models can be used only for non-commercial purposes. Any parties interested in using them commercially need to contact the rights holders (the voice artists) to license their voices.
For more information see the licenses section under [Additional information](#additional-information).

## How to Get Started with the Model

### Installation

Models have been trained using the espeak-ng open source text-to-speech software.
The espeak-ng fork containing the Catalan phonemizer can be found [here](https://github.com/projecte-aina/espeak-ng)

Create a virtual environment:

```bash
python -m venv /path/to/venv
```
```bash
source /path/to/venv/bin/activate
```

For training and synthesizing with Catalan Matxa-TTS you need to compile the provided espeak-ng with the Catalan phonemizer:

```bash
git clone https://github.com/projecte-aina/espeak-ng.git

export PYTHON=/path/to/env/<env_name>/bin/python
cd /path/to/espeak-ng
./autogen.sh
./configure --prefix=/path/to/espeak-ng
make
make install

pip cache purge
pip install mecab-python3
pip install unidic-lite
```

Clone the repository:

```bash
git clone -b dev-cat https://github.com/langtech-bsc/Matcha-TTS.git
cd Matcha-TTS
```

Install the package from source:

```bash
pip install -e .
```

### For Inference

#### PyTorch

End-to-end speech inference can be done with **Catalan Matxa-TTS**.
Both models (Catalan Matxa-TTS and alVoCat) are loaded remotely from the HF hub.
First, export the following environment variables to include the installed espeak-ng version:

```bash
export PYTHON=/path/to/your/venv/bin/python
export ESPEAK_DATA_PATH=/path/to/espeak-ng/espeak-ng-data
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/espeak-ng/lib
export PATH="/path/to/espeak-ng/bin:$PATH"
```

Then you can run the inference script:

```bash
cd Matcha-TTS
python3 matcha_vocos_inference.py --output_path=/output/path --text_input="Bon dia Manel, avui anem a la muntanya."
```

You can also modify the length scale (speech rate) and the temperature of the generated sample:

```bash
python3 matcha_vocos_inference.py --output_path=/output/path --text_input="Bon dia Manel, avui anem a la muntanya." --length_scale=0.8 --temperature=0.7
```

#### ONNX

We also release ONNX versions of the models.

### For Training

See the [repo instructions](https://github.com/langtech-bsc/Matcha-TTS/tree/dev-cat)

## Training Details

### Training data

The model was trained on a **Multiaccent Catalan** speech dataset

| Dataset | Language | Hours | Num. Speakers |
|---------------------|----------|---------|-----------------|
| Lafrescat (coming soon) | ca | 3.5 | 8 |

### Training procedure

***Matxa Multiaccent Catalan*** was fine-tuned from a Catalan central [multispeaker checkpoint](https://huggingface.co/BSC-LT/matcha-tts-cat-multispeaker), which was trained on 28 hours of data from multiple speakers.

The embedding layer was initialized with the number of Catalan speakers per accent (2) and the original hyperparameters were kept.

### Training Hyperparameters

* batch size: 32 (x2 GPUs)
* learning rate: 1e-4
* number of speakers: 2
* n_fft: 1024
* n_feats: 80
* sample_rate: 22050
* hop_length: 256
* win_length: 1024
* f_min: 0
* f_max: 8000
* data_statistics:
  * mel_mean: -6578195
  * mel_std: 2.538758
* number of samples: 13340

## Evaluation

Validation values obtained from tensorboard from epoch 2399*:

* val_dur_loss_epoch: 0.38
* val_prior_loss_epoch: 0.97
* val_diff_loss_epoch: 2.195

## Citation

If this code contributes to your research, please cite the work:

```
@misc{mehta2024matchatts,
      title={Matcha-TTS: A fast TTS architecture with conditional flow matching},
      author={Shivam Mehta and Ruibo Tu and Jonas Beskow and Éva Székely and Gustav Eje Henter},
      year={2024},
      eprint={2309.03199},
      archivePrefix={arXiv},
      primaryClass={eess.AS}
}
```

## Additional Information

### Author
The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact
For further information, please send an email to <[email protected]>.

### Copyright
Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.

### License
[Creative Commons Attribution Non-commercial 4.0](https://www.creativecommons.org/licenses/by-nc/4.0/)

These models are free to use for non-commercial and research purposes. Commercial use is only possible through licensing by the voice artists. For further information, contact <[email protected]> and <[email protected]>.

### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).

Part of the training of the model was possible thanks to the compute time given by Galician Supercomputing Center CESGA ([Centro de Supercomputación de Galicia](https://www.cesga.es/)), and also by [Barcelona Supercomputing Center](https://www.bsc.es/) in MareNostrum 5.
{"language": ["ca"], "license": "cc-by-nc-4.0", "tags": ["matcha-tts", "acoustic modelling", "speech", "multispeaker", "tts"], "base_model": "BSC-LT/matcha-tts-cat-multispeaker", "pipeline_tag": "text-to-speech"}
projecte-aina/matxa-tts-cat-multiaccent
null
[ "pytorch", "onnx", "matcha-tts", "acoustic modelling", "speech", "multispeaker", "tts", "text-to-speech", "ca", "arxiv:2105.06337", "arxiv:2309.03199", "base_model:BSC-LT/matcha-tts-cat-multispeaker", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-16T10:08:09+00:00
[ "2105.06337", "2309.03199" ]
[ "ca" ]
TAGS #pytorch #onnx #matcha-tts #acoustic modelling #speech #multispeaker #tts #text-to-speech #ca #arxiv-2105.06337 #arxiv-2309.03199 #base_model-BSC-LT/matcha-tts-cat-multispeaker #license-cc-by-nc-4.0 #region-us
Matxa-TTS (Matcha-TTS) Catalan Multiaccent
==========================================

Table of Contents
-----------------

Click to expand

* Model description
* Intended uses and limitations
* How to use
* Training
* Evaluation
* Citation
* Additional information

Model Description
-----------------

Matxa-TTS is based on Matcha-TTS, an encoder-decoder architecture designed for fast acoustic modelling in TTS.
The encoder part is based on a text encoder and a phoneme duration predictor that together predict averaged acoustic features. The decoder has essentially a U-Net backbone inspired by Grad-TTS, which is based on the Transformer architecture. In the latter, replacing 2D CNNs with 1D CNNs yields a large reduction in memory consumption and fast synthesis.

Matxa-TTS is a non-autoregressive model trained with optimal-transport conditional flow matching (OT-CFM). This yields an ODE-based decoder capable of generating high output quality in fewer synthesis steps than models trained using score matching.

Intended Uses and Limitations
-----------------------------

This model is intended to serve as an acoustic feature generator for multispeaker text-to-speech systems for the Catalan language. It has been fine-tuned using a Catalan phonemizer; therefore, if the model is used for other languages it will not produce intelligible samples after mapping its output into a speech waveform.

The quality of the samples can vary depending on the speaker. This may be due to the sensitivity of the model in learning specific frequencies and also due to the quality of samples for each speaker.

As explained in the licenses section, the models can be used only for non-commercial purposes. Any parties interested in using them commercially need to contact the rights holders (the voice artists) to license their voices. For more information see the licenses section under Additional information.

How to Get Started with the Model
---------------------------------

### Installation

Models have been trained using the espeak-ng open source text-to-speech software.
The espeak-ng fork containing the Catalan phonemizer can be found here

Create a virtual environment:

For training and synthesizing with Catalan Matxa-TTS you need to compile the provided espeak-ng with the Catalan phonemizer:

Clone the repository:

Install the package from source:

### For Inference

#### PyTorch

End-to-end speech inference can be done with Catalan Matxa-TTS.
Both models (Catalan Matxa-TTS and alVoCat) are loaded remotely from the HF hub.

First, export the following environment variables to include the installed espeak-ng version:

Then you can run the inference script:

You can also modify the length scale (speech rate) and the temperature of the generated sample:

#### ONNX

We also release ONNX versions of the models

### For Training

See the repo instructions

Training Details
----------------

### Training data

The model was trained on a Multiaccent Catalan speech dataset

### Training procedure

*Matxa Multiaccent Catalan* was fine-tuned from a Catalan central multispeaker checkpoint, which was trained on 28 hours of data from multiple speakers.

The embedding layer was initialized with the number of Catalan speakers per accent (2) and the original hyperparameters were kept.
### Training Hyperparameters * batch size: 32 (x2 GPUs) * learning rate: 1e-4 * number of speakers: 2 * n\_fft: 1024 * n\_feats: 80 * sample\_rate: 22050 * hop\_length: 256 * win\_length: 1024 * f\_min: 0 * f\_max: 8000 * data\_statistics: + mel\_mean: -6578195 + mel\_std: 2.538758 * number of samples: 13340 Evaluation ---------- Validation values obtained from tensorboard from epoch 2399\*: * val\_dur\_loss\_epoch: 0.38 * val\_prior\_loss\_epoch: 0.97 * val\_diff\_loss\_epoch: 2.195 If this code contributes to your research, please cite the work: Additional Information ---------------------- ### Author The Language Technologies Unit from Barcelona Supercomputing Center. ### Contact For further information, please send an email to [langtech@URL](mailto:langtech@URL). ### Copyright Copyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center. ### License Creative Commons Attribution Non-commercial 4.0 These models are free to use for non-commercial and research purposes. Commercial use is only possible through licensing by the voice artists. For further information, contact [langtech@URL](mailto:langtech@URL) and [lafrescaproduccions@URL](mailto:lafrescaproduccions@URL). ### Funding This work has been promoted and financed by the Generalitat de Catalunya through the Aina project. Part of the training of the model was possible thanks to the compute time given by Galician Supercomputing Center CESGA (Centro de Supercomputación de Galicia), and also by Barcelona Supercomputing Center in MareNostrum 5.
[ "### Installation\n\n\nModels have been trained using the espeak-ng open source text-to-speech software.\nThe espeak-ng containing the Catalan phonemizer can be found here\n\n\nCreate a virtual environment:\n\n\nFor training and synthesizing with Catalan Matxa-TTS you need to compile the provided espeak-ng with the Catalan phonemizer:\n\n\nClone the repository:\n\n\nInstall the package from source:", "### For Inference", "#### PyTorch\n\n\nSpeech end-to-end inference can be done together with Catalan Matxa-TTS.\nBoth models (Catalan Matxa-TTS and alVoCat) are loaded remotely from the HF hub.\n\n\nFirst, export the following environment variables to include the installed espeak-ng version:\n\n\nThen you can run the inference script:\n\n\nYou can also modify the length scale (speech rate) and the temperature of the generated sample:", "#### ONNX\n\n\nWe also release ONNXs version of the models", "### For Training\n\n\nSee the repo instructions\n\n\nTraining Details\n----------------", "### Training data\n\n\nThe model was trained on a Multiaccent Catalan speech dataset", "### Training procedure\n\n\n*Matxa Multiaccent Catalan* was finetuned from a catalan central multispeaker checkpoint, that was trained on 28 hours of data from multiple speakers.\n\n\nThe embedding layer was initialized with the number of catalan speakers per accent (2) and the original hyperparameters were kept.", "### Training Hyperparameters\n\n\n* batch size: 32 (x2 GPUs)\n* learning rate: 1e-4\n* number of speakers: 2\n* n\\_fft: 1024\n* n\\_feats: 80\n* sample\\_rate: 22050\n* hop\\_length: 256\n* win\\_length: 1024\n* f\\_min: 0\n* f\\_max: 8000\n* data\\_statistics:\n\t+ mel\\_mean: -6578195\n\t+ mel\\_std: 2.538758\n* number of samples: 13340\n\n\nEvaluation\n----------\n\n\nValidation values obtained from tensorboard from epoch 2399\\*:\n\n\n* val\\_dur\\_loss\\_epoch: 0.38\n* val\\_prior\\_loss\\_epoch: 0.97\n* val\\_diff\\_loss\\_epoch: 2.195\n\n\nIf this code contributes to your research, please cite the work:\n\n\nAdditional Information\n----------------------", "### Author\n\n\nThe Language Technologies Unit from Barcelona Supercomputing Center.", "### Contact\n\n\nFor further information, please send an email to [langtech@URL](mailto:langtech@URL).", "### Copyright\n\n\nCopyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.", "### License\n\n\nCreative Commons Attribution Non-commercial 4.0\n\n\nThese models are free to use for non-commercial and research purposes. Commercial use is only possible through licensing by\nthe voice artists. For further information, contact [langtech@URL](mailto:langtech@URL) and [lafrescaproduccions@URL](mailto:lafrescaproduccions@URL).", "### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the Aina project.\n\n\nPart of the training of the model was possible thanks to the compute time given by Galician Supercomputing Center CESGA\n(Centro de Supercomputación de Galicia), and also by Barcelona Supercomputing Center in MareNostrum 5." ]
[ "TAGS\n#pytorch #onnx #matcha-tts #acoustic modelling #speech #multispeaker #tts #text-to-speech #ca #arxiv-2105.06337 #arxiv-2309.03199 #base_model-BSC-LT/matcha-tts-cat-multispeaker #license-cc-by-nc-4.0 #region-us \n", "### Installation\n\n\nModels have been trained using the espeak-ng open source text-to-speech software.\nThe espeak-ng containing the Catalan phonemizer can be found here\n\n\nCreate a virtual environment:\n\n\nFor training and synthesizing with Catalan Matxa-TTS you need to compile the provided espeak-ng with the Catalan phonemizer:\n\n\nClone the repository:\n\n\nInstall the package from source:", "### For Inference", "#### PyTorch\n\n\nSpeech end-to-end inference can be done together with Catalan Matxa-TTS.\nBoth models (Catalan Matxa-TTS and alVoCat) are loaded remotely from the HF hub.\n\n\nFirst, export the following environment variables to include the installed espeak-ng version:\n\n\nThen you can run the inference script:\n\n\nYou can also modify the length scale (speech rate) and the temperature of the generated sample:", "#### ONNX\n\n\nWe also release ONNXs version of the models", "### For Training\n\n\nSee the repo instructions\n\n\nTraining Details\n----------------", "### Training data\n\n\nThe model was trained on a Multiaccent Catalan speech dataset", "### Training procedure\n\n\n*Matxa Multiaccent Catalan* was finetuned from a catalan central multispeaker checkpoint, that was trained on 28 hours of data from multiple speakers.\n\n\nThe embedding layer was initialized with the number of catalan speakers per accent (2) and the original hyperparameters were kept.", "### Training Hyperparameters\n\n\n* batch size: 32 (x2 GPUs)\n* learning rate: 1e-4\n* number of speakers: 2\n* n\\_fft: 1024\n* n\\_feats: 80\n* sample\\_rate: 22050\n* hop\\_length: 256\n* win\\_length: 1024\n* f\\_min: 0\n* f\\_max: 8000\n* data\\_statistics:\n\t+ mel\\_mean: -6578195\n\t+ mel\\_std: 2.538758\n* number of samples: 13340\n\n\nEvaluation\n----------\n\n\nValidation values obtained from tensorboard from epoch 2399\\*:\n\n\n* val\\_dur\\_loss\\_epoch: 0.38\n* val\\_prior\\_loss\\_epoch: 0.97\n* val\\_diff\\_loss\\_epoch: 2.195\n\n\nIf this code contributes to your research, please cite the work:\n\n\nAdditional Information\n----------------------", "### Author\n\n\nThe Language Technologies Unit from Barcelona Supercomputing Center.", "### Contact\n\n\nFor further information, please send an email to [langtech@URL](mailto:langtech@URL).", "### Copyright\n\n\nCopyright(c) 2023 by Language Technologies Unit, Barcelona Supercomputing Center.", "### License\n\n\nCreative Commons Attribution Non-commercial 4.0\n\n\nThese models are free to use for non-commercial and research purposes. Commercial use is only possible through licensing by\nthe voice artists. For further information, contact [langtech@URL](mailto:langtech@URL) and [lafrescaproduccions@URL](mailto:lafrescaproduccions@URL).", "### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the Aina project.\n\n\nPart of the training of the model was possible thanks to the compute time given by Galician Supercomputing Center CESGA\n(Centro de Supercomputación de Galicia), and also by Barcelona Supercomputing Center in MareNostrum 5." ]
text-generation
transformers
# Mistral 2 Times This is a merge of pre-trained language models created using merge techniques. ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as a base. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: mistralai/Mistral-7B-Instruct-v0.1 parameters: weight: 0.4 density: 0.88 - model: mistralai/Mistral-7B-Instruct-v0.2 parameters: weight: 0.6 density: 0.6 dtype: bfloat16 merge_method: ties base_model: mistralai/Mistral-7B-Instruct-v0.2 ```
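A config in this format is what the mergekit tool consumes (typically run as `mergekit-yaml config.yaml ./merged`), though the card does not name the tool used. Loading the merged checkpoint is then standard transformers usage; a minimal sketch, assuming the repo inherits the Mistral-Instruct chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Narkantak/Mistral-2x7b-Instruct-1x2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-Instruct checkpoints ship a chat template; the merge should inherit it.
messages = [{"role": "user", "content": "Summarize what a TIES merge does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```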
{"tags": ["merge"], "base_model": ["mistralai/Mistral-7B-Instruct-v0.1", "mistralai/Mistral-7B-Instruct-v0.2"]}
Narkantak/Mistral-2x7b-Instruct-1x2
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "conversational", "arxiv:2306.01708", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:09:29+00:00
[ "2306.01708" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #conversational #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Mistral 2 Times This is a merge of pre-trained language models created using merge techniques. ## Merge Details ### Merge Method This model was merged using the TIES merge method using mistralai/Mistral-7B-Instruct-v0.2 as a base. ### Models Merged The following models were included in the merge: * mistralai/Mistral-7B-Instruct-v0.1 * mistralai/Mistral-7B-Instruct-v0.2 ### Configuration The following YAML configuration was used to produce this model:
[ "# Mistral 2 Times\n\nThis is a merge of pre-trained language models created using merge techniques.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mistralai/Mistral-7B-Instruct-v0.2 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.1\n* mistralai/Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #conversational #arxiv-2306.01708 #base_model-mistralai/Mistral-7B-Instruct-v0.1 #base_model-mistralai/Mistral-7B-Instruct-v0.2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Mistral 2 Times\n\nThis is a merge of pre-trained language models created using merge techniques.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using mistralai/Mistral-7B-Instruct-v0.2 as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.1\n* mistralai/Mistral-7B-Instruct-v0.2", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
samyukthacodes/wellnessroots_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:09:47+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Armenian Names to English Translation Model ## Model Overview This translation model is specifically designed to accurately and fluently translate Armenian names and surnames into English. ## Intended Uses and Limitations This model is built for the Spark IT enterprise, which is looking to automate the translation of Armenian names and surnames into English. ## Training and Evaluation Data This model has been trained on a diverse dataset consisting of over 44,000 lines of data, encompassing a wide range of Armenian names and surnames along with their English counterparts. Evaluation data has been carefully selected to ensure reliable and accurate translation performance. ## Training Procedure - 1 day of training ### Hardware Environment: - Azure Studio - Standard_DS12_v2 - 4 cores, 28GB RAM, 56GB storage - Data manipulation and training on medium-sized datasets (1-10GB) - 6 cores - Loss: 0.4618 - Bleu: 70.7674 - Gen Len: 10.2548 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:| | 1.5404 | 1.0 | 1600 | 1.4342 | 28.4832 | 9.9022 | | 1.3173 | 2.0 | 3200 | 1.2974 | 32.3512 | 9.7181 | | 1.1818 | 3.0 | 4800 | 1.2034 | 36.7577 | 9.59 | | 1.0775 | 4.0 | 6400 | 1.1575 | 40.8335 | 9.5912 | | 0.9875 | 5.0 | 8000 | 1.1074 | 41.4916 | 9.5656 | | 0.9304 | 6.0 | 9600 | 1.0766 | 44.4507 | 9.5872 | | 0.8896 | 7.0 | 11200 | 1.0645 | 44.0168 | 9.5491 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.2+cpu - Datasets 2.15.0 - Tokenizers 0.15.2
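A minimal usage sketch for the checkpoint described above, assuming the repo id from this record ("ihebaker10/spark-name-hy-to-en") and that the `marian`-tagged checkpoint resolves through the standard Auto* seq2seq classes; the Armenian input string is purely illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from this record; tagged "marian", so the Auto* seq2seq classes should resolve it.
model_id = "ihebaker10/spark-name-hy-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative Armenian given name; replace with your own input.
inputs = tokenizer("Տիգրան", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```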
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "ihebaker10/spark-name-hy-to-en", "model-index": [{"name": "spark-name-hy-to-en", "results": []}]}
ihebaker10/spark-name-hy-to-en
null
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "generated_from_trainer", "base_model:ihebaker10/spark-name-hy-to-en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:09:53+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #marian #text2text-generation #generated_from_trainer #base_model-ihebaker10/spark-name-hy-to-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Armenian Names to English Translation Model =========================================== Model Overview -------------- This translation model is specifically designed to accurately and fluently translate Armenian names and surnames into English. Intended Uses and Limitations ----------------------------- This model is built for the Spark IT enterprise, which is looking to automate the translation of Armenian names and surnames into English. Training and Evaluation Data ---------------------------- This model has been trained on a diverse dataset consisting of over 44,000 lines of data, encompassing a wide range of Armenian names and surnames along with their English counterparts. Evaluation data has been carefully selected to ensure reliable and accurate translation performance. Training Procedure ------------------ * 1 day of training ### Hardware Environment: * Azure Studio * Standard\_DS12\_v2 * 4 cores, 28GB RAM, 56GB storage * Data manipulation and training on medium-sized datasets (1-10GB) * 6 cores * Loss: 0.4618 * Bleu: 70.7674 * Gen Len: 10.2548 Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 7 ### Training results ### Framework versions * Transformers 4.39.1 * Pytorch 2.2.2+cpu * Datasets 2.15.0 * Tokenizers 0.15.2
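The card reports Bleu and Gen Len on the evaluation set; as a hedged sketch (assuming the reported numbers are on the usual sacrebleu 0-100 scale that Seq2SeqTrainer setups commonly log), such a score can be reproduced offline with the `evaluate` library. The example strings are illustrative, not taken from this record:

```python
import evaluate

# Illustrative prediction/reference pair; the real evaluation set is not part of this record.
predictions = ["Tigran Petrosyan"]
references = [["Tigran Petrosyan"]]  # one list of reference translations per prediction

sacrebleu = evaluate.load("sacrebleu")
result = sacrebleu.compute(predictions=predictions, references=references)
print(result["score"])  # 0-100 scale, comparable to the Bleu values reported above
```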
[ "### Hardware Environment:\n\n\n* Azure Studio\n* Standard\\_DS12\\_v2\n* 4 cores, 28GB RAM, 56GB storage\n* Data manipulation and training on medium-sized datasets (1-10GB)\n* 6 cores\n* Loss: 0.4618\n* Bleu: 70.7674\n* Gen Len: 10.2548\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.2+cpu\n* Datasets 2.15.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #marian #text2text-generation #generated_from_trainer #base_model-ihebaker10/spark-name-hy-to-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Hardware Environment:\n\n\n* Azure Studio\n* Standard\\_DS12\\_v2\n* 4 cores, 28GB RAM, 56GB storage\n* Data manipulation and training on medium-sized datasets (1-10GB)\n* 6 cores\n* Loss: 0.4618\n* Bleu: 70.7674\n* Gen Len: 10.2548\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 7", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.2+cpu\n* Datasets 2.15.0\n* Tokenizers 0.15.2" ]
feature-extraction
transformers
# jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 ## Model Description jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found [**here**](https://huggingface.co/datasets/florianhoenicke/jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564). ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from transformers import AutoModel, AutoTokenizer model_name = "florianhoenicke/jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens).last_hidden_state[:, 0] ```
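A fuller, runnable version of the snippet above, sketched under the assumption that this bge-style encoder follows the BAAI/bge convention of [CLS] pooling with L2 normalization (the card itself does not state the pooling strategy):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "florianhoenicke/jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

texts = ["first example sentence", "second example sentence"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# [CLS] pooling + L2 normalization, the usual convention for bge encoders.
embeddings = F.normalize(hidden[:, 0], p=2, dim=1)

# Cosine similarity between the two sentences (dot product of unit vectors).
print(float(embeddings[0] @ embeddings[1]))
```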
{}
florianhoenicke/jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564
null
[ "transformers", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:10:44+00:00
[]
[]
TAGS #transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us
# jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 ## Model Description jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found here. ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
[ "# jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564", "## Model Description\n\njina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.", "## Use Case\nThis model is designed to support various applications in natural language processing and understanding.", "## Associated Dataset\n\nThe dataset for this model can be found here.", "## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us \n", "# jina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564", "## Model Description\n\njina-website-1-64-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.", "## Use Case\nThis model is designed to support various applications in natural language processing and understanding.", "## Associated Dataset\n\nThe dataset for this model can be found here.", "## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained from the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements from the original repo facebook/opt-125m are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-opt-125m-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-125m, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
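The "Sync" vs. "Async" distinction in the FAQ above is easiest to see in code. Below is a hedged illustration of the two timing styles on a CUDA model, not a reproduction of Pruna's actual benchmark harness (which is configured via `smash_config.json`):

```python
import time
import torch

@torch.no_grad()
def time_forward(model, input_ids, sync: bool) -> float:
    """Elapsed seconds for one forward pass, measured sync or async."""
    torch.cuda.synchronize()       # start from an idle GPU either way
    start = time.perf_counter()
    _ = model(input_ids)           # queues the GPU kernels
    if sync:
        torch.cuda.synchronize()   # "Sync": wait until every kernel has finished
    # "Async": without the second synchronize, the clock stops as soon as control
    # returns to the CPU, even though some kernels may still be running on the GPU.
    return time.perf_counter() - start
```

Run a few untimed warmup calls before measuring; the card's "first" metrics correspond to skipping that warmup.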
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-opt-125m-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T10:12:45+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentation to learn more here - Join the Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained from the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo facebook/opt-125m are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info is in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-125m, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to learn more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained from the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. \"Async\" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-125m are installed. In particular, check the python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-125m, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to learn more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to see whether the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained from the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. \"Async\" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-125m are installed. In particular, check the python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-125m, which provided the base model, before using this one. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
null
pytorch
# PVNet2 ## Model Description <!-- Provide a longer summary of what this model is/does. --> This model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in [this google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing). - **Developed by:** openclimatefix - **Model type:** Fusion model - **Language(s) (NLP):** en - **License:** mit # Training Details ## Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> The model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing) for more details. ### Preprocessing Data is prepared with the `ocf_datapipes.training.pvnet` datapipe [2]. ## Results The training logs for the current model can be found here: - [https://wandb.ai/openclimatefix/pvnet2.1/runs/g9einxvs](https://wandb.ai/openclimatefix/pvnet2.1/runs/g9einxvs) The training logs for all model runs of PVNet2 can be found [here](https://wandb.ai/openclimatefix/pvnet2.1). Some experimental notes can be found in [the google doc](https://docs.google.com/document/d/1fbkfkBzp16WbnCg7RDuRDvgzInA6XQu3xh4NCjV-WDA/edit?usp=sharing) ### Hardware Trained on a single NVIDIA Tesla T4 ### Software - [1] https://github.com/openclimatefix/PVNet - [2] https://github.com/openclimatefix/ocf_datapipes
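The card does not include a loading snippet. As a hedged sketch, standard Hub tooling can at least enumerate and fetch the checkpoint files; the filename below is a placeholder guess, so list the repo first to see what it actually ships:

```python
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "openclimatefix/pvnet_uk_region"

# Inspect what the repo actually contains before assuming any filename.
print(list_repo_files(repo_id))

# "pytorch_model.bin" is a placeholder, not confirmed by the card.
checkpoint_path = hf_hub_download(repo_id=repo_id, filename="pytorch_model.bin")
print(checkpoint_path)
```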
{"language": "en", "license": "mit", "library_name": "pytorch"}
openclimatefix/pvnet_uk_region
null
[ "pytorch", "en", "license:mit", "region:us" ]
null
2024-04-16T10:13:40+00:00
[]
[ "en" ]
TAGS #pytorch #en #license-mit #region-us
# PVNet2 ## Model Description This model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in this google doc. - Developed by: openclimatefix - Model type: Fusion model - Language(s) (NLP): en - License: mit # Training Details ## Data The model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in the google doc for more details. ### Preprocessing Data is prepared with the 'ocf_datapipes.URL' datapipe [2]. ## Results The training logs for the current model can be found here: - URL The training logs for all model runs of PVNet2 can be found here. Some experimental notes can be found in the google doc ### Hardware Trained on a single NVIDIA Tesla T4 ### Software - [1] URL - [2] URL
[ "# PVNet2", "## Model Description\n\n\nThis model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in this google doc.\n\n- Developed by: openclimatefix\n- Model type: Fusion model\n- Language(s) (NLP): en\n- License: mit", "# Training Details", "## Data\n\n\n\nThe model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in the google doc for more details.", "### Preprocessing\n\nData is prepared with the 'ocf_datapipes.URL' datapipe [2].", "## Results\n\nThe training logs for the current model can be found here:\n - URL\n\n\nThe training logs for all model runs of PVNet2 can be found here.\n\nSome experimental notes can be found in the google doc", "### Hardware\n\nTrained on a single NVIDIA Tesla T4", "### Software\n\n- [1] URL\n- [2] URL" ]
[ "TAGS\n#pytorch #en #license-mit #region-us \n", "# PVNet2", "## Model Description\n\n\nThis model class uses satellite data, numerical weather predictions, and recent Grid Supply Point (GSP) PV power output to forecast the near-term (~8 hours) PV power output at all GSPs. More information can be found in the model repo [1] and experimental notes in this google doc.\n\n- Developed by: openclimatefix\n- Model type: Fusion model\n- Language(s) (NLP): en\n- License: mit", "# Training Details", "## Data\n\n\n\nThe model is trained on data from 2019-2022 and validated on data from 2022-2023. See experimental notes in the google doc for more details.", "### Preprocessing\n\nData is prepared with the 'ocf_datapipes.URL' datapipe [2].", "## Results\n\nThe training logs for the current model can be found here:\n - URL\n\n\nThe training logs for all model runs of PVNet2 can be found here.\n\nSome experimental notes can be found in the google doc", "### Hardware\n\nTrained on a single NVIDIA Tesla T4", "### Software\n\n- [1] URL\n- [2] URL" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hadifar/hal_l1
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:14:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jeongmi/0416_solar_model
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:15:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_0-seqsight_16384_512_22M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_0) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5473
- F1 Score: 0.7211
- Accuracy: 0.722

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.6379 | 12.5 | 200 | 0.5976 | 0.6751 | 0.675 |
| 0.5712 | 25.0 | 400 | 0.5715 | 0.6931 | 0.694 |
| 0.5452 | 37.5 | 600 | 0.5778 | 0.7011 | 0.701 |
| 0.5231 | 50.0 | 800 | 0.5682 | 0.7195 | 0.72 |
| 0.5078 | 62.5 | 1000 | 0.5623 | 0.7203 | 0.721 |
| 0.499 | 75.0 | 1200 | 0.5618 | 0.7211 | 0.721 |
| 0.4908 | 87.5 | 1400 | 0.5612 | 0.7150 | 0.715 |
| 0.4851 | 100.0 | 1600 | 0.5615 | 0.7058 | 0.706 |
| 0.4796 | 112.5 | 1800 | 0.5611 | 0.7146 | 0.715 |
| 0.4736 | 125.0 | 2000 | 0.5527 | 0.7173 | 0.718 |
| 0.468 | 137.5 | 2200 | 0.5697 | 0.7210 | 0.721 |
| 0.463 | 150.0 | 2400 | 0.5655 | 0.7138 | 0.715 |
| 0.4582 | 162.5 | 2600 | 0.5633 | 0.7067 | 0.707 |
| 0.4529 | 175.0 | 2800 | 0.5721 | 0.7110 | 0.712 |
| 0.4471 | 187.5 | 3000 | 0.5796 | 0.7191 | 0.72 |
| 0.4413 | 200.0 | 3200 | 0.5808 | 0.7018 | 0.702 |
| 0.4355 | 212.5 | 3400 | 0.5835 | 0.7046 | 0.705 |
| 0.4299 | 225.0 | 3600 | 0.5876 | 0.7055 | 0.706 |
| 0.4244 | 237.5 | 3800 | 0.6032 | 0.7011 | 0.701 |
| 0.4188 | 250.0 | 4000 | 0.5967 | 0.7020 | 0.702 |
| 0.4137 | 262.5 | 4200 | 0.6208 | 0.7024 | 0.703 |
| 0.4105 | 275.0 | 4400 | 0.6026 | 0.7037 | 0.704 |
| 0.4043 | 287.5 | 4600 | 0.6262 | 0.7048 | 0.705 |
| 0.3992 | 300.0 | 4800 | 0.6276 | 0.7050 | 0.705 |
| 0.3954 | 312.5 | 5000 | 0.6360 | 0.7095 | 0.71 |
| 0.3899 | 325.0 | 5200 | 0.6355 | 0.7134 | 0.715 |
| 0.386 | 337.5 | 5400 | 0.6370 | 0.7137 | 0.714 |
| 0.3807 | 350.0 | 5600 | 0.6524 | 0.708 | 0.708 |
| 0.3783 | 362.5 | 5800 | 0.6594 | 0.7112 | 0.712 |
| 0.3736 | 375.0 | 6000 | 0.6700 | 0.7069 | 0.707 |
| 0.3713 | 387.5 | 6200 | 0.6583 | 0.7078 | 0.708 |
| 0.367 | 400.0 | 6400 | 0.6578 | 0.7069 | 0.707 |
| 0.3638 | 412.5 | 6600 | 0.6611 | 0.7189 | 0.719 |
| 0.3603 | 425.0 | 6800 | 0.6747 | 0.7178 | 0.718 |
| 0.3575 | 437.5 | 7000 | 0.6777 | 0.7129 | 0.713 |
| 0.3547 | 450.0 | 7200 | 0.6696 | 0.7123 | 0.713 |
| 0.3521 | 462.5 | 7400 | 0.6856 | 0.7120 | 0.712 |
| 0.3487 | 475.0 | 7600 | 0.6850 | 0.7146 | 0.715 |
| 0.3471 | 487.5 | 7800 | 0.6859 | 0.7128 | 0.713 |
| 0.3457 | 500.0 | 8000 | 0.6913 | 0.7138 | 0.714 |
| 0.342 | 512.5 | 8200 | 0.6916 | 0.714 | 0.714 |
| 0.3399 | 525.0 | 8400 | 0.6863 | 0.7079 | 0.708 |
| 0.3385 | 537.5 | 8600 | 0.6967 | 0.7069 | 0.707 |
| 0.3361 | 550.0 | 8800 | 0.7009 | 0.7130 | 0.713 |
| 0.3362 | 562.5 | 9000 | 0.7002 | 0.7130 | 0.713 |
| 0.3351 | 575.0 | 9200 | 0.7007 | 0.7189 | 0.719 |
| 0.3342 | 587.5 | 9400 | 0.6969 | 0.7098 | 0.71 |
| 0.3325 | 600.0 | 9600 | 0.7015 | 0.7149 | 0.715 |
| 0.3331 | 612.5 | 9800 | 0.7005 | 0.7158 | 0.716 |
| 0.3319 | 625.0 | 10000 | 0.7016 | 0.7158 | 0.716 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
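Because this repository ships only a PEFT adapter, inference first loads the seqsight base checkpoint and then attaches the adapter on top. Below is a minimal sketch of that pattern; the binary `num_labels` and the `trust_remote_code` flag are assumptions on my part, not details stated in the card.

```python
# Sketch: attach the GUE_tf_0 PEFT adapter to its seqsight base model.
# num_labels=2 and trust_remote_code=True are assumptions, not confirmed above.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_22M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights stay separate
model.eval()
```

Calling `model.merge_and_unload()` afterwards would fold the adapter into the base weights if a plain checkpoint is preferred for serving.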
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_0-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_0-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T10:15:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_0-seqsight\_16384\_512\_22M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.5473 * F1 Score: 0.7211 * Accuracy: 0.722 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hadifar/hal_l2
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:15:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
kai-oh/mistral-7b-ift-tapt-v9-hf
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:15:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2_lora

This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
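The hyperparameter list above maps almost one-to-one onto a 🤗 `TrainingArguments` object. A minimal sketch follows, assuming "Native AMP" corresponds to `fp16=True` and using a placeholder `output_dir`:

```python
# Sketch: the hyperparameters above expressed as TrainingArguments.
# output_dir is a placeholder; fp16=True is my reading of "Native AMP".
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="wav2vec2_lora",        # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    fp16=True,
)
```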
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/wav2vec2-base-960h", "model-index": [{"name": "wav2vec2_lora", "results": []}]}
Chijioke-Mgbahurike/wav2vec2_lora
null
[ "tensorboard", "safetensors", "generated_from_trainer", "base_model:facebook/wav2vec2-base-960h", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:16:28+00:00
[]
[]
TAGS #tensorboard #safetensors #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #region-us
# wav2vec2_lora This model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# wav2vec2_lora\n\nThis model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#tensorboard #safetensors #generated_from_trainer #base_model-facebook/wav2vec2-base-960h #license-apache-2.0 #region-us \n", "# wav2vec2_lora\n\nThis model is a fine-tuned version of facebook/wav2vec2-base-960h on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
null
**Deepsex-34b**

Thanks to [TheBloke](https://huggingface.co/TheBloke) for making the quantized versions!

gguf: https://huggingface.co/TheBloke/deepsex-34b-GGUF
exl2: https://huggingface.co/waldie/deepsex-34b-4bpw-h6-exl2
awq: https://huggingface.co/TheBloke/deepsex-34b-AWQ

6b base version: https://huggingface.co/TriadParty/deepsex-6b-base
6b chat version: https://huggingface.co/TriadParty/deepsex-6b-chat

In fact, I plan to make a "Seven Deadly Sins" series of models. Of course, the pre-training data used in these models is all human-produced. I think a large model is like a mirror, reflecting humanity itself; examining ourselves may become a crucial step toward realizing AGI. So, this one is 'lust'. The 6B counterpart of this model is in production, and the corresponding Llama version is also being made. Classification data for the other six deadly sins is being collected. Welcome to provide inspiration!

Here are the steps used to make this model:

1. I first collected about 4 GB of assorted light novels and used BERT to run two rounds of similarity deduplication over novels in the dataset with similar plots. In addition, a portion of NSFW novels was mixed in to improve the model's NSFW capabilities.
2. Then, using Yi-34B-base as the base model, I fine-tuned for 3 epochs with QLoRA (r=64, alpha=128) as continued pre-training.
3. I prepared the LimaRP + PIPPA datasets, cleaned them into Alpaca format, used [goliath-120b](https://huggingface.co/alpindale/goliath-120b), which is good at role-playing, to score each question-answer pair, and filtered out roughly 30k high-quality pairs.
4. Using the data from step 3, I ran SFT on the base model from step 2: 6 epochs of fine-tuning with r=16, alpha=32.

*Format* alpaca

```
[
  {
    "instruction": "user instruction (required)",
    "input": "user input (optional)",
    "output": "model response (required)",
    "history": [
      ["user instruction in the first round (optional)", "model response in the first round (optional)"],
      ["user instruction in the second round (optional)", "model response in the second round (optional)"]
    ]
  }
]
```

*Effect*: Proficient in role-playing, highly permissive on NSFW, while pure-love lines still appear from time to time. For example:

```
#3 Sweaty old man December 5, 2023 2:03 PM Fxxk, you are such a xxx!
#4 27.3s Mirai December 5, 2023 2:03 PM "Of course I do! I can't break promises, Sweaty old man. We have been together since we were kids. We are both best friends and lovers to end all iteration." I smiled with affection. It was clear that I meant everything I said. "We both know that you like taking command of us like this. Am I not your squirting toy, Sweaty old man?" I asked with a cute pout. "We should meet up in front of the shop after classes. I'll see you there. See you, Sweaty old man!"
```

It feels like it's still worth a try~

Support me [here](https://ko-fi.com/mikolisa) :)
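The two LoRA settings named in the steps above (r=64, alpha=128 for continued pre-training; r=16, alpha=32 for SFT) translate into `peft.LoraConfig` objects roughly as sketched below. The card does not state which modules were targeted, so the `target_modules` here are a common choice for Llama-style attention, not the author's confirmed setting.

```python
# Sketch of the two LoRA configurations described above (peft).
# target_modules is an assumption; the card does not specify them.
from peft import LoraConfig

pretrain_cfg = LoraConfig(
    r=64, lora_alpha=128, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
sft_cfg = LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)
```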
{"language": ["en"], "license": "mit", "tags": ["roleplay", "not-for-all-audiences"], "datasets": ["lemonilia/LimaRP", "PygmalionAI/PIPPA"], "pipeline_tag": "text-generation"}
Supradeku/Deepsex34BOriginalFp16
null
[ "roleplay", "not-for-all-audiences", "text-generation", "en", "dataset:lemonilia/LimaRP", "dataset:PygmalionAI/PIPPA", "license:mit", "region:us" ]
null
2024-04-16T10:17:54+00:00
[]
[ "en" ]
TAGS #roleplay #not-for-all-audiences #text-generation #en #dataset-lemonilia/LimaRP #dataset-PygmalionAI/PIPPA #license-mit #region-us
Deepsex-34b

Thanks to TheBloke for making the quantized versions!

gguf: URL exl2: URL awq: URL 6b base version: URL 6b chat version: URL

In fact, I plan to make a "Seven Deadly Sins" series of models. Of course, the pre-training data used in these models is all human-produced. I think a large model is like a mirror, reflecting humanity itself; examining ourselves may become a crucial step toward realizing AGI. So, this one is 'lust'. The 6B counterpart of this model is in production, and the corresponding Llama version is also being made. Classification data for the other six deadly sins is being collected. Welcome to provide inspiration!

Here are the steps used to make this model:

1. I first collected about 4 GB of assorted light novels and used BERT to run two rounds of similarity deduplication over novels in the dataset with similar plots. In addition, a portion of NSFW novels was mixed in to improve the model's NSFW capabilities.
2. Then, using Yi-34B-base as the base model, I fine-tuned for 3 epochs with QLoRA (r=64, alpha=128) as continued pre-training.
3. I prepared the LimaRP + PIPPA datasets, cleaned them into Alpaca format, used goliath-120b, which is good at role-playing, to score each question-answer pair, and filtered out roughly 30k high-quality pairs.
4. Using the data from step 3, I ran SFT on the base model from step 2: 6 epochs of fine-tuning with r=16, alpha=32.

*Format* alpaca

*Effect*: Proficient in role-playing, highly permissive on NSFW, while pure-love lines still appear from time to time.

It feels like it's still worth a try~ Support me here :)
[]
[ "TAGS\n#roleplay #not-for-all-audiences #text-generation #en #dataset-lemonilia/LimaRP #dataset-PygmalionAI/PIPPA #license-mit #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# icellama_domar_pretuned_v2

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5311

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6301 | 0.05 | 500 | 1.6427 |
| 1.6438 | 0.1 | 1000 | 1.6136 |
| 1.612 | 0.14 | 1500 | 1.5963 |
| 1.5958 | 0.19 | 2000 | 1.5836 |
| 1.5981 | 0.24 | 2500 | 1.5741 |
| 1.61 | 0.29 | 3000 | 1.5666 |
| 1.5862 | 0.34 | 3500 | 1.5608 |
| 1.529 | 0.38 | 4000 | 1.5563 |
| 1.5807 | 0.43 | 4500 | 1.5523 |
| 1.5732 | 0.48 | 5000 | 1.5491 |
| 1.553 | 0.53 | 5500 | 1.5464 |
| 1.5993 | 0.57 | 6000 | 1.5444 |
| 1.5427 | 0.62 | 6500 | 1.5424 |
| 1.5705 | 0.67 | 7000 | 1.5408 |
| 1.546 | 0.72 | 7500 | 1.5395 |
| 1.5425 | 0.77 | 8000 | 1.5383 |
| 1.5793 | 0.81 | 8500 | 1.5374 |
| 1.5514 | 0.86 | 9000 | 1.5366 |
| 1.5709 | 0.91 | 9500 | 1.5359 |
| 1.5638 | 0.96 | 10000 | 1.5352 |
| 1.5792 | 1.01 | 10500 | 1.5347 |
| 1.5027 | 1.05 | 11000 | 1.5342 |
| 1.5345 | 1.1 | 11500 | 1.5339 |
| 1.5431 | 1.15 | 12000 | 1.5334 |
| 1.548 | 1.2 | 12500 | 1.5332 |
| 1.5513 | 1.25 | 13000 | 1.5328 |
| 1.5717 | 1.29 | 13500 | 1.5326 |
| 1.539 | 1.34 | 14000 | 1.5324 |
| 1.5256 | 1.39 | 14500 | 1.5323 |
| 1.4934 | 1.44 | 15000 | 1.5320 |
| 1.5648 | 1.48 | 15500 | 1.5320 |
| 1.5342 | 1.53 | 16000 | 1.5319 |
| 1.5125 | 1.58 | 16500 | 1.5317 |
| 1.4974 | 1.63 | 17000 | 1.5316 |
| 1.5086 | 1.68 | 17500 | 1.5316 |
| 1.5387 | 1.72 | 18000 | 1.5315 |
| 1.5126 | 1.77 | 18500 | 1.5314 |
| 1.565 | 1.82 | 19000 | 1.5314 |
| 1.5888 | 1.87 | 19500 | 1.5313 |
| 1.5585 | 1.92 | 20000 | 1.5313 |
| 1.5599 | 1.96 | 20500 | 1.5313 |
| 1.5476 | 2.01 | 21000 | 1.5312 |
| 1.5661 | 2.06 | 21500 | 1.5312 |
| 1.5514 | 2.11 | 22000 | 1.5312 |
| 1.531 | 2.16 | 22500 | 1.5312 |
| 1.5705 | 2.2 | 23000 | 1.5311 |
| 1.5576 | 2.25 | 23500 | 1.5311 |
| 1.5601 | 2.3 | 24000 | 1.5312 |
| 1.5438 | 2.35 | 24500 | 1.5312 |
| 1.5758 | 2.39 | 25000 | 1.5311 |
| 1.5704 | 2.44 | 25500 | 1.5311 |
| 1.5225 | 2.49 | 26000 | 1.5311 |
| 1.556 | 2.54 | 26500 | 1.5311 |
| 1.5701 | 2.59 | 27000 | 1.5311 |
| 1.5135 | 2.63 | 27500 | 1.5311 |
| 1.5029 | 2.68 | 28000 | 1.5311 |
| 1.5471 | 2.73 | 28500 | 1.5311 |
| 1.5083 | 2.78 | 29000 | 1.5311 |
| 1.5557 | 2.83 | 29500 | 1.5311 |
| 1.5394 | 2.87 | 30000 | 1.5311 |
| 1.5449 | 2.92 | 30500 | 1.5311 |
| 1.5467 | 2.97 | 31000 | 1.5311 |

### Framework versions

- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2
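The card reports only cross-entropy losses, so a quick way to read the final figure is as perplexity. A one-line check, assuming the loss is mean token-level cross-entropy in nats (the 🤗 Trainer default):

```python
import math

eval_loss = 1.5311           # final validation loss from the table above
perplexity = math.exp(eval_loss)
print(f"{perplexity:.2f}")   # ≈ 4.62
```

Note also that the effective batch size follows from the list above: train_batch_size 4 × gradient_accumulation_steps 4 = total_train_batch_size 16.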
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "icellama_domar_pretuned_v2", "results": []}]}
thorirhrafn/icellama_domar_pretuned_v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "license:llama2", "region:us" ]
null
2024-04-16T10:18:31+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us
icellama\_domar\_pretuned\_v2 ============================= This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.5311 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * PEFT 0.8.2 * Transformers 4.38.1 * Pytorch 2.2.0+cu118 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.2\n* Transformers 4.38.1\n* Pytorch 2.2.0+cu118\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_1-seqsight_16384_512_22M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_1](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_1) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- F1 Score: 0.7596
- Accuracy: 0.76

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6426 | 13.33 | 200 | 0.5996 | 0.6717 | 0.673 |
| 0.5766 | 26.67 | 400 | 0.5931 | 0.6868 | 0.687 |
| 0.5507 | 40.0 | 600 | 0.5815 | 0.6930 | 0.693 |
| 0.5294 | 53.33 | 800 | 0.5895 | 0.6845 | 0.686 |
| 0.5144 | 66.67 | 1000 | 0.5973 | 0.6848 | 0.685 |
| 0.5048 | 80.0 | 1200 | 0.5982 | 0.6920 | 0.692 |
| 0.4975 | 93.33 | 1400 | 0.6065 | 0.6918 | 0.692 |
| 0.4922 | 106.67 | 1600 | 0.6160 | 0.6810 | 0.681 |
| 0.4866 | 120.0 | 1800 | 0.6054 | 0.6865 | 0.687 |
| 0.4817 | 133.33 | 2000 | 0.6208 | 0.6840 | 0.686 |
| 0.476 | 146.67 | 2200 | 0.6162 | 0.6878 | 0.688 |
| 0.4707 | 160.0 | 2400 | 0.6138 | 0.6890 | 0.689 |
| 0.4664 | 173.33 | 2600 | 0.6302 | 0.6767 | 0.677 |
| 0.4583 | 186.67 | 2800 | 0.6295 | 0.6686 | 0.669 |
| 0.4542 | 200.0 | 3000 | 0.6217 | 0.6729 | 0.673 |
| 0.4485 | 213.33 | 3200 | 0.6222 | 0.6769 | 0.677 |
| 0.4415 | 226.67 | 3400 | 0.6420 | 0.6719 | 0.672 |
| 0.4368 | 240.0 | 3600 | 0.6708 | 0.6800 | 0.68 |
| 0.4316 | 253.33 | 3800 | 0.6493 | 0.6785 | 0.679 |
| 0.4253 | 266.67 | 4000 | 0.6593 | 0.6867 | 0.687 |
| 0.4186 | 280.0 | 4200 | 0.6592 | 0.6835 | 0.684 |
| 0.4136 | 293.33 | 4400 | 0.6836 | 0.6802 | 0.681 |
| 0.4086 | 306.67 | 4600 | 0.6775 | 0.676 | 0.676 |
| 0.403 | 320.0 | 4800 | 0.6759 | 0.6710 | 0.671 |
| 0.3975 | 333.33 | 5000 | 0.6887 | 0.6739 | 0.674 |
| 0.3947 | 346.67 | 5200 | 0.6631 | 0.6667 | 0.667 |
| 0.3894 | 360.0 | 5400 | 0.7048 | 0.6542 | 0.655 |
| 0.3845 | 373.33 | 5600 | 0.6915 | 0.6680 | 0.668 |
| 0.3807 | 386.67 | 5800 | 0.7002 | 0.6677 | 0.668 |
| 0.376 | 400.0 | 6000 | 0.7042 | 0.6630 | 0.663 |
| 0.372 | 413.33 | 6200 | 0.7078 | 0.6678 | 0.668 |
| 0.3687 | 426.67 | 6400 | 0.7077 | 0.6698 | 0.67 |
| 0.3641 | 440.0 | 6600 | 0.7167 | 0.6649 | 0.665 |
| 0.3609 | 453.33 | 6800 | 0.7060 | 0.6680 | 0.668 |
| 0.3562 | 466.67 | 7000 | 0.7300 | 0.6678 | 0.668 |
| 0.3537 | 480.0 | 7200 | 0.7242 | 0.6686 | 0.669 |
| 0.3511 | 493.33 | 7400 | 0.7320 | 0.6719 | 0.672 |
| 0.3493 | 506.67 | 7600 | 0.7319 | 0.6738 | 0.674 |
| 0.3473 | 520.0 | 7800 | 0.7302 | 0.6768 | 0.677 |
| 0.3429 | 533.33 | 8000 | 0.7378 | 0.6728 | 0.673 |
| 0.3405 | 546.67 | 8200 | 0.7341 | 0.6678 | 0.668 |
| 0.3409 | 560.0 | 8400 | 0.7317 | 0.6666 | 0.667 |
| 0.3375 | 573.33 | 8600 | 0.7361 | 0.6659 | 0.666 |
| 0.3365 | 586.67 | 8800 | 0.7315 | 0.6708 | 0.671 |
| 0.3348 | 600.0 | 9000 | 0.7400 | 0.6649 | 0.665 |
| 0.3329 | 613.33 | 9200 | 0.7459 | 0.6660 | 0.666 |
| 0.3328 | 626.67 | 9400 | 0.7486 | 0.6649 | 0.665 |
| 0.3323 | 640.0 | 9600 | 0.7435 | 0.6640 | 0.664 |
| 0.3296 | 653.33 | 9800 | 0.7462 | 0.6659 | 0.666 |
| 0.33 | 666.67 | 10000 | 0.7460 | 0.6679 | 0.668 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
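The F1 and accuracy columns above track each other closely without being identical, which is consistent with an averaged F1 over the two classes. A sketch of the kind of `compute_metrics` hook that would produce such numbers; the `average="macro"` mode is an assumption, not stated in the card:

```python
# Sketch: a Trainer-style metrics hook producing F1 and accuracy as above.
# average="macro" is an assumption; the card does not state the averaging mode.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "f1": f1_score(labels, preds, average="macro"),
        "accuracy": accuracy_score(labels, preds),
    }
```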
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_1-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_1-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T10:18:44+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_1-seqsight\_16384\_512\_22M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.4917 * F1 Score: 0.7596 * Accuracy: 0.76 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Vissa15AI/mistral_b_finance_finetuned_test
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:20:17+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
What is the Oxapam pill review? Oxapam capsule pills stand out as a leading natural supplement, made from a blend of herbal extracts, vitamins, and minerals known for their rejuvenating properties. Designed to support vitality, stamina, and energy, Oxapam tablets offer a natural and sustainable approach to raising energy levels and improving overall health. Official website:<a href="https://www.nutritionsee.com/Oxapsbangs">www.Oxapam.com</a> <p><a href="https://www.nutritionsee.com/Oxapsbangs"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Oxapam-Bangladesh.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/Oxapsbangs">Buy now!! Click the link below for more information and get 50% off now... Hurry up</a> Official website:<a href="https://www.nutritionsee.com/Oxapsbangs">www.Oxapam.com</a>
{"license": "apache-2.0"}
OxapamBangladesh/OxapamBangladesh
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-16T10:20:19+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
What is the Oxapam pill review? Oxapam capsule pills stand out as a leading natural supplement, made from a blend of herbal extracts, vitamins, and minerals known for their rejuvenating properties. Designed to support vitality, stamina, and energy, Oxapam tablets offer a natural and sustainable approach to raising energy levels and improving overall health. Official website:<a href="URL <p><a href="URL <img src="URL alt="enter image description here"> </a></p> <a href="URL>Buy now!! Click the link below for more information and get 50% off now... Hurry up</a> Official website:<a href="URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_tf_4-seqsight_16384_512_22M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_4](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_4) dataset. It achieves the following results on the evaluation set:
- Loss: 0.8841
- F1 Score: 0.7136
- Accuracy: 0.715

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.6304 | 20.0 | 200 | 0.6243 | 0.6618 | 0.662 |
| 0.533 | 40.0 | 400 | 0.6037 | 0.6989 | 0.699 |
| 0.4852 | 60.0 | 600 | 0.5962 | 0.7130 | 0.716 |
| 0.4454 | 80.0 | 800 | 0.5970 | 0.7104 | 0.713 |
| 0.4193 | 100.0 | 1000 | 0.5969 | 0.7142 | 0.715 |
| 0.4028 | 120.0 | 1200 | 0.6048 | 0.7272 | 0.73 |
| 0.3913 | 140.0 | 1400 | 0.5978 | 0.7323 | 0.733 |
| 0.3779 | 160.0 | 1600 | 0.6037 | 0.7240 | 0.724 |
| 0.3676 | 180.0 | 1800 | 0.6335 | 0.7191 | 0.722 |
| 0.358 | 200.0 | 2000 | 0.6235 | 0.7218 | 0.725 |
| 0.3471 | 220.0 | 2200 | 0.6177 | 0.7291 | 0.73 |
| 0.3357 | 240.0 | 2400 | 0.6010 | 0.7363 | 0.739 |
| 0.323 | 260.0 | 2600 | 0.6170 | 0.7425 | 0.743 |
| 0.3125 | 280.0 | 2800 | 0.6359 | 0.7424 | 0.745 |
| 0.3017 | 300.0 | 3000 | 0.6176 | 0.7390 | 0.741 |
| 0.2895 | 320.0 | 3200 | 0.6426 | 0.7447 | 0.747 |
| 0.2803 | 340.0 | 3400 | 0.6563 | 0.7530 | 0.754 |
| 0.2718 | 360.0 | 3600 | 0.6319 | 0.7508 | 0.754 |
| 0.2642 | 380.0 | 3800 | 0.6343 | 0.7671 | 0.768 |
| 0.2535 | 400.0 | 4000 | 0.6492 | 0.7668 | 0.768 |
| 0.2464 | 420.0 | 4200 | 0.6175 | 0.7698 | 0.772 |
| 0.2382 | 440.0 | 4400 | 0.6834 | 0.7515 | 0.756 |
| 0.23 | 460.0 | 4600 | 0.6649 | 0.7698 | 0.772 |
| 0.2265 | 480.0 | 4800 | 0.6774 | 0.7624 | 0.765 |
| 0.2171 | 500.0 | 5000 | 0.6516 | 0.7693 | 0.772 |
| 0.2111 | 520.0 | 5200 | 0.6484 | 0.7596 | 0.762 |
| 0.2067 | 540.0 | 5400 | 0.6413 | 0.7842 | 0.785 |
| 0.2027 | 560.0 | 5600 | 0.6443 | 0.7764 | 0.778 |
| 0.1936 | 580.0 | 5800 | 0.6651 | 0.7748 | 0.777 |
| 0.1923 | 600.0 | 6000 | 0.6899 | 0.7664 | 0.769 |
| 0.1858 | 620.0 | 6200 | 0.6685 | 0.7749 | 0.777 |
| 0.1823 | 640.0 | 6400 | 0.6567 | 0.7807 | 0.782 |
| 0.1778 | 660.0 | 6600 | 0.6779 | 0.7762 | 0.778 |
| 0.1757 | 680.0 | 6800 | 0.6929 | 0.7710 | 0.773 |
| 0.1719 | 700.0 | 7000 | 0.6893 | 0.7788 | 0.78 |
| 0.1684 | 720.0 | 7200 | 0.6582 | 0.7872 | 0.788 |
| 0.1648 | 740.0 | 7400 | 0.6689 | 0.7775 | 0.779 |
| 0.1632 | 760.0 | 7600 | 0.6848 | 0.7733 | 0.775 |
| 0.1593 | 780.0 | 7800 | 0.7022 | 0.7652 | 0.767 |
| 0.158 | 800.0 | 8000 | 0.6726 | 0.7788 | 0.78 |
| 0.1558 | 820.0 | 8200 | 0.7225 | 0.7702 | 0.772 |
| 0.1531 | 840.0 | 8400 | 0.6985 | 0.7747 | 0.776 |
| 0.1513 | 860.0 | 8600 | 0.6891 | 0.7798 | 0.781 |
| 0.1504 | 880.0 | 8800 | 0.7235 | 0.7797 | 0.781 |
| 0.1491 | 900.0 | 9000 | 0.7042 | 0.7756 | 0.777 |
| 0.1479 | 920.0 | 9200 | 0.7000 | 0.7756 | 0.777 |
| 0.1469 | 940.0 | 9400 | 0.7140 | 0.7736 | 0.775 |
| 0.1462 | 960.0 | 9600 | 0.7094 | 0.7725 | 0.774 |
| 0.1455 | 980.0 | 9800 | 0.7112 | 0.7726 | 0.774 |
| 0.1455 | 1000.0 | 10000 | 0.7086 | 0.7726 | 0.774 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
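Because this checkpoint is published as a PEFT adapter rather than a full model, inference generally means loading the base checkpoint first and attaching the adapter on top. A minimal sketch follows; the sequence-classification head, `num_labels=2`, and the placeholder DNA input are assumptions not confirmed by the card (the base model may also require `trust_remote_code=True` if it ships custom code):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_22M"
adapter_id = "mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)  # num_labels is an assumption
model = PeftModel.from_pretrained(base, adapter_id)  # attach the fine-tuned adapter weights
model.eval()

# GUE_tf_4 is a transcription-factor binding task, so the input is a DNA string (placeholder below).
inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```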
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_4-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_4-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T10:24:04+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_4-seqsight\_16384\_512\_22M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_4 dataset. It achieves the following results on the evaluation set: * Loss: 0.8841 * F1 Score: 0.7136 * Accuracy: 0.715 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="moebachar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
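The snippet above assumes a `load_from_hub` helper that is never imported; in Q-Learning course material it is usually a small function that downloads the pickled Q-table and unpickles it. A hedged sketch of that helper plus a greedy rollout follows — the `qtable`/`env_id` dictionary keys and the `gymnasium` import are assumptions, not confirmed by this card:

```python
import pickle

import gymnasium as gym
import numpy as np
from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled model dict from the Hub and unpickle it.
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)


model = load_from_hub(repo_id="moebachar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # attribute flagged in the card's comment

# Greedy rollout: always pick the action with the highest Q-value for the current state.
state, _ = env.reset()
terminated = truncated = False
total_reward = 0.0
while not (terminated or truncated):
    action = int(np.argmax(model["qtable"][state]))  # "qtable" key is an assumption
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```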
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
moebachar/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-16T10:24:25+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# music-genre-classifer-20-finetuned-gtzan

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the GTZAN dataset. It achieves the following results on the evaluation set:
- Loss: 1.1602
- Accuracy: 0.81

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

The original table listed accuracy and validation loss in inconsistent column orders across epochs; the values below are unchanged but realigned under a single header.

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0297 | 1.0 | 113 | 2.0056 | 0.46 |
| 1.6252 | 2.0 | 226 | 1.5821 | 0.61 |
| 1.4001 | 3.0 | 339 | 1.3967 | 0.62 |
| 1.0201 | 4.0 | 452 | 1.1288 | 0.77 |
| 1.0074 | 5.0 | 565 | 1.0933 | 0.69 |
| 0.8466 | 6.0 | 678 | 0.9162 | 0.76 |
| 0.6966 | 7.0 | 791 | 0.9122 | 0.79 |
| 0.5324 | 8.0 | 904 | 0.7715 | 0.82 |
| 0.6692 | 9.0 | 1017 | 1.0549 | 0.71 |
| 0.7181 | 10.0 | 1130 | 0.8758 | 0.76 |
| 0.5585 | 11.0 | 1243 | 1.0753 | 0.7 |
| 0.4479 | 12.0 | 1356 | 1.1517 | 0.7 |
| 0.3145 | 13.0 | 1469 | 0.8475 | 0.79 |
| 0.8197 | 14.0 | 1582 | 0.8852 | 0.78 |
| 0.4665 | 15.0 | 1695 | 1.0134 | 0.77 |
| 0.2371 | 16.0 | 1808 | 1.0250 | 0.75 |
| 0.3823 | 17.0 | 1921 | 0.9516 | 0.81 |
| 0.5478 | 18.0 | 2034 | 1.2008 | 0.77 |
| 0.3165 | 19.0 | 2147 | 1.0737 | 0.8 |
| 0.1403 | 20.0 | 2260 | 0.9801 | 0.83 |
| 0.2754 | 21.0 | 2373 | 1.0137 | 0.82 |
| 0.2649 | 22.0 | 2486 | 1.2249 | 0.77 |
| 0.0686 | 23.0 | 2599 | 1.3234 | 0.77 |
| 0.0073 | 24.0 | 2712 | 1.2360 | 0.8 |
| 0.0068 | 25.0 | 2825 | 1.1338 | 0.81 |
| 0.0058 | 26.0 | 2938 | 1.2976 | 0.79 |
| 0.0054 | 27.0 | 3051 | 1.1782 | 0.83 |
| 0.0047 | 28.0 | 3164 | 1.0677 | 0.84 |
| 0.0045 | 29.0 | 3277 | 1.1128 | 0.83 |
| 0.0036 | 30.0 | 3390 | 1.1602 | 0.81 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
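The card lists training settings but no inference snippet; since the checkpoint is a standard wav2vec2 audio-classification model, the `transformers` pipeline is the shortest route to predictions. A minimal sketch (the audio path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an audio-classification pipeline.
classifier = pipeline("audio-classification", model="vadhri/wav2vec2-base-finetuned-gtzan")

# Classify a local clip; returns a list of {"label": ..., "score": ...} dicts, highest score first.
predictions = classifier("some_song.wav")
print(predictions)
```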
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["marsyas/gtzan"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "music-genre-classifer-20-finetuned-gtzan", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "GTZAN", "type": "marsyas/gtzan", "config": "all", "split": "train", "args": "all"}, "metrics": [{"type": "accuracy", "value": 0.81, "name": "Accuracy"}]}]}]}
vadhri/wav2vec2-base-finetuned-gtzan
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:marsyas/gtzan", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:25:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
music-genre-classifer-20-finetuned-gtzan ======================================== This model is a fine-tuned version of facebook/wav2vec2-base on the GTZAN dataset. It achieves the following results on the evaluation set: * Loss: 1.1602 * Accuracy: 0.81 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-marsyas/gtzan #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
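The quick-start section of this card is still a placeholder. For a gpt_neox text-generation checkpoint like this one, a generic causal-LM load is the usual starting point; the dtype/device settings and the Korean smoke-test prompt below are illustrative assumptions, not values from the card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wendy41/polyglot-12.8b_2step-causal-lm"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16, device_map="auto")

# Polyglot models are Korean-centric, so a Korean prompt is a reasonable smoke test.
inputs = tokenizer("안녕하세요, 오늘 날씨는", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```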
{"library_name": "transformers", "tags": []}
wendy41/polyglot-12.8b_2step-causal-lm
null
[ "transformers", "safetensors", "gpt_neox", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:25:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
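Since the usage cell above is still a TODO, here is a hedged sketch of the standard huggingface_sb3 loading pattern; the checkpoint filename is an assumption, so check the repo's file list before running:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# load_from_hub downloads the checkpoint from the Hub and returns its local path.
checkpoint = load_from_hub(repo_id="Yankovich/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```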
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-5.46 +/- 107.66", "name": "mean_reward", "verified": false}]}]}]}
Yankovich/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-16T10:26:25+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since each could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo facebook/opt-1.3b are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-opt-1.3b-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-1.3b")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-1.3b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-opt-1.3b-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T10:26:47+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-1.3b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-1.3b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
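This card's quick-start is also a placeholder. Because the tags mark the checkpoint as a conversational Mistral model, inference typically goes through the tokenizer's chat template. A minimal sketch with illustrative (not card-confirmed) dtype/device settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "yuhuixu/dpo_mistralai-Mistral-7B-Instruct-v0.2-experts"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Summarize what DPO fine-tuning does in one sentence."}]
# apply_chat_template formats the conversation with the model's chat markers.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```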
{"library_name": "transformers", "tags": []}
yuhuixu/dpo_mistralai-Mistral-7B-Instruct-v0.2-experts
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:27:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-Resol964x1350-5Epochs This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
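The card above lists the fine-tuning recipe but no usage snippet. Below is a minimal inference sketch for this Donut fine-tune, assuming the standard `DonutProcessor`/`VisionEncoderDecoderModel` API; the input file name and the task prompt are hypothetical placeholders, since the card does not state the prompt used during training.

```python
from PIL import Image
import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-Resol964x1350-5Epochs"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

# Hypothetical input image; the model was tuned on an imagefolder dataset.
image = Image.open("document_page.png").convert("RGB")
pixel_values = processor(image, return_tensors="pt").pixel_values

# Assumption: the task prompt below is a placeholder; replace it with the
# prompt actually used when fine-tuning this checkpoint.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(
        pixel_values,
        decoder_input_ids=decoder_input_ids,
        max_length=512,
    )
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```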
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-Resol964x1350-5Epochs", "results": []}]}
tedad09/PolizzeDonut-Resol964x1350-5Epochs
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:28:19+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-Resol964x1350-5Epochs This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-Resol964x1350-5Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-Resol964x1350-5Epochs\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation_own_data This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 0.4734 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.6476 | 2.5 | 20 | 0.4816 | | 0.3414 | 5.0 | 40 | 0.3842 | | 0.2565 | 7.5 | 60 | 0.3931 | | 0.1973 | 10.0 | 80 | 0.4198 | | 0.1245 | 12.5 | 100 | 0.4734 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
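Since this repo holds a PEFT adapter rather than full model weights, loading it differs from a plain checkpoint. A minimal sketch, assuming the adapter repo id from this card's metadata and the standard Mistral-Instruct `[INST]` prompt format (the card itself does not show a prompt template):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_repo = "abdullahfurquan/mistral_instruct_generation_own_data"

# AutoPeftModelForCausalLM reads the base model named in the adapter config
# (mistralai/Mistral-7B-Instruct-v0.1) and attaches the adapter weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_repo, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

prompt = "[INST] Write one sentence about fine-tuning. [/INST]"  # hypothetical example
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```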
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.1", "model-index": [{"name": "mistral_instruct_generation_own_data", "results": []}]}
abdullahfurquan/mistral_instruct_generation_own_data
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:29:44+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us
mistral\_instruct\_generation\_own\_data ======================================== This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 0.4734 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * training\_steps: 100 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.1 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "dpo"]}
kai-oh/mistral-7b-dpo-v8-hf
null
[ "transformers", "safetensors", "mistral", "text-generation", "trl", "dpo", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:29:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #trl #dpo #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
Binary causal sentence classification: * LABEL_0 = Non-causal * LABEL_1 = Causal See the project repository here: https://github.com/rasoulnorouzi/cessc/tree/main
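A minimal usage sketch with the `transformers` pipeline API, using the label mapping stated above; the example sentence is a hypothetical input adapted from the widget examples in this card's metadata:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="rasoultilburg/uce_scibert")

result = clf("The storm weakened because of dry air entering the circulation.")
# Expected shape: [{'label': 'LABEL_1', 'score': ...}]
# where LABEL_0 = Non-causal and LABEL_1 = Causal.
print(result)
```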
{"language": "en", "license": "gpl-3.0", "widget": [{"text": "In the beginning, Sonca seemed to have intensified rapidly since its formation , however, soon the storm weakened back to a minimal tropical storm because of dry air entering the LLCC that caused it to elongate and weaken.", "example_title": "Causal Example 1"}, {"text": "Our findings thus far show that the sanction reduced the number of chips that participants allocated to themselves and that it only increased the number of chips allocated to the yellow pool when there were two options.", "example_title": "Causal Example 2"}, {"text": "In addition, several vent gas scrubbers had been out of service as well as the steam boiler, intended to clean the pipes.", "example_title": "Non-causal Example 1"}, {"text": "First, we can assess the correlation between beliefs and contributions, which we expect to differ between types of players and which helps us to check on the player type as elicited in the P-experiment.", "example_title": "Non-causal Example 2"}]}
rasoultilburg/uce_scibert
null
[ "transformers", "safetensors", "bert", "text-classification", "en", "license:gpl-3.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:32:15+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #bert #text-classification #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us
Binary causal sentence classification: * LABEL_0 = Non-causal * LABEL_1 = Causal See the project repository here: URL
[]
[ "TAGS\n#transformers #safetensors #bert #text-classification #en #license-gpl-3.0 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. 
Check that the requirements from the original repo facebook/opt-350m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code is required to load the smashed (GPTQ-quantized) checkpoint.
    model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-opt-350m-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
    # The tokenizer is unchanged by compression, so it comes from the base repo.
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

    input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-opt-350m-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T10:34:47+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="URL" target="_blank" rel="noopener noreferrer">
      <img src="https://i.URL" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>


![Twitter](URL)
![GitHub](URL)
![LinkedIn](URL)
![Discord](URL)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next here.
- Request access to easily compress your *own* AI models here.
- Read the documentation to know more here
- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.

## Results

!image info

Frequently Asked Questions
- *How does the compression work?* The model is compressed with gptq.
- *How does the model quality change?* The quality of the model output might vary compared to the base model.
- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo facebook/opt-350m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
 
2. Load & run the model.

## Configurations

The configuration info is in 'smash_config.json'.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. \"Async\" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-350m are installed. In particular, check the python, cuda, and transformers versions.\n1. Make sure that you have installed the quantization-related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. \"Async\" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-350m are installed. In particular, check the python, cuda, and transformers versions.\n1. Make sure that you have installed the quantization-related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-350m, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lenate_model_10 This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1105 - Accuracy: 0.9640 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 28 | 0.7205 | 0.9189 | | No log | 2.0 | 56 | 0.2958 | 0.9550 | | No log | 3.0 | 84 | 0.1696 | 0.9550 | | No log | 4.0 | 112 | 0.1309 | 0.9550 | | No log | 5.0 | 140 | 0.1216 | 0.9550 | | No log | 6.0 | 168 | 0.1105 | 0.9640 | | No log | 7.0 | 196 | 0.1173 | 0.9550 | | No log | 8.0 | 224 | 0.1159 | 0.9550 | | No log | 9.0 | 252 | 0.1164 | 0.9550 | | No log | 10.0 | 280 | 0.1181 | 0.9550 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
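The card reports accuracy but no usage snippet. A minimal sketch, assuming the repo id from this card's metadata; the card does not document the label set, so outputs may be generic `LABEL_n` ids unless the model config defines `id2label`:

```python
from transformers import pipeline

clf = pipeline("text-classification", model="lenate/lenate_model_10")
print(clf("An example input sentence."))  # hypothetical input
```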
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "lenate_model_10", "results": []}]}
lenate/lenate_model_10
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:35:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
lenate\_model\_10 ================= This model is a fine-tuned version of distilbert/distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1105 * Accuracy: 0.9640 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65f22e4076fedc4fd11e978f/MoTedec_ZL8GM2MmGyAPs.png)



# T3Q-LLM-solar10.8-sft-v1.0

## This model is a version of yanolja/EEVE-Korean-Instruct-10.8B-v1.0 that has been fine-tuned with SFT.

## Model Developers Chihoon Lee (chihoonlee10), T3Q

hf (pretrained=T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None

|      Task      |Version| Metric |Value |   |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq    |      0|acc     |0.9288|±  |0.0069|
|                |       |macro_f1|0.9286|±  |0.0069|
|kobest_copa     |      0|acc     |0.7440|±  |0.0138|
|                |       |macro_f1|0.7434|±  |0.0138|
|kobest_hellaswag|      0|acc     |0.4880|±  |0.0224|
|                |       |acc_norm|0.5600|±  |0.0222|
|                |       |macro_f1|0.4854|±  |0.0224|
|kobest_sentineg |      0|acc     |0.8589|±  |0.0175|
|                |       |macro_f1|0.8589|±  |0.0175|

hf (pretrained=yanolja/EEVE-Korean-Instruct-10.8B-v1.0), limit: None, provide_description: False, num_fewshot: 0, batch_size: None

|      Task      |Version| Metric |Value |   |Stderr|
|----------------|------:|--------|-----:|---|-----:|
|kobest_boolq    |      0|acc     |0.9188|±  |0.0073|
|                |       |macro_f1|0.9185|±  |0.0073|
|kobest_copa     |      0|acc     |0.7520|±  |0.0137|
|                |       |macro_f1|0.7516|±  |0.0136|
|kobest_hellaswag|      0|acc     |0.4840|±  |0.0224|
|                |       |acc_norm|0.5580|±  |0.0222|
|                |       |macro_f1|0.4804|±  |0.0223|
|kobest_sentineg |      0|acc     |0.8514|±  |0.0179|
|                |       |macro_f1|0.8508|±  |0.0180|
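A minimal loading sketch for this checkpoint. The repo id comes from this card; the dtype and device placement below are assumptions (reasonable defaults for a ~10.8B model), and the prompt is a hypothetical example since the card does not show a chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit a 10.8B model
    device_map="auto",
)

prompt = "한국의 수도는 어디인가요?"  # hypothetical Korean question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```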
{"license": "apache-2.0", "library_name": "transformers", "datasets": ["davidkim205/ko_common_gen"], "pipeline_tag": "text-generation", "base model": ["yanolja/EEVE-Korean-Instruct-10.8B-v1.0"]}
T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:davidkim205/ko_common_gen", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:35:13+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #dataset-davidkim205/ko_common_gen #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/png


T3Q-LLM-solar10.8-sft-v1.0
==========================

This model is a version of yanolja/EEVE-Korean-Instruct-10.8B-v1.0 that has been fine-tuned with SFT.
-----------------------------------------------------------------------------------------------------

Model Developers Chihoon Lee (chihoonlee10), T3Q
------------------------------------------------

hf (pretrained=T3Q-LLM/T3Q-LLM-solar10.8-sft-v1.0), limit: None, provide\_description: False, num\_fewshot: 0, batch\_size: None


hf (pretrained=yanolja/EEVE-Korean-Instruct-10.8B-v1.0), limit: None, provide\_description: False, num\_fewshot: 0, batch\_size: None
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-davidkim205/ko_common_gen #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nuebaek/komt-mistral-mss
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:38:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
multiple-choice
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-mcq This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0314 - Accuracy: 0.7284 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 313 | 0.7425 | 0.7144 | | 0.7946 | 2.0 | 626 | 0.8059 | 0.7196 | | 0.7946 | 3.0 | 939 | 1.0314 | 0.7284 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
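Multiple-choice heads expect inputs shaped `(batch, num_choices, seq_len)`, which is easy to get wrong; a minimal sketch, assuming the repo id from this card's metadata and a hypothetical two-choice question:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "sahithya20/bert-base-uncased-mcq"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

question = "What does a tokenizer do?"  # hypothetical example
choices = ["Splits text into tokens.", "Trains the optimizer."]

# Encode the question once per choice, then add the num_choices dimension.
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_choices)
print("Predicted choice:", choices[logits.argmax(dim=-1).item()])
```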
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "bert-base-uncased-mcq", "results": []}]}
sahithya20/bert-base-uncased-mcq
null
[ "transformers", "tensorboard", "safetensors", "bert", "multiple-choice", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:40:26+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
bert-base-uncased-mcq ===================== This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0314 * Accuracy: 0.7284 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #multiple-choice #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
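The bert-base-uncased-mcq card above reports evaluation accuracy but includes no usage snippet; a minimal inference sketch might look like the following — the question and candidate answers are invented for illustration, and the checkpoint is assumed to load as a standard multiple-choice head (its `bert` + `multiple-choice` tags suggest this):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("sahithya20/bert-base-uncased-mcq")
model = AutoModelForMultipleChoice.from_pretrained("sahithya20/bert-base-uncased-mcq")

question = "What is the capital of France?"       # hypothetical input
choices = ["Berlin", "Paris", "Madrid", "Rome"]   # hypothetical candidates

# Pair the question with each candidate; the model scores every pair.
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True)
# The multiple-choice head expects inputs of shape (batch, num_choices, seq_len).
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, num_choices)
print(choices[logits.argmax(dim=-1).item()])
```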
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
galbitang/polyglot
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:42:15+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ruBert-base-sberquad-0.02-len_3-filtered-negative-v2 This model is a fine-tuned version of [ai-forever/ruBert-base](https://huggingface.co/ai-forever/ruBert-base) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "ai-forever/ruBert-base", "model-index": [{"name": "ruBert-base-sberquad-0.02-len_3-filtered-negative-v2", "results": []}]}
Shalazary/ruBert-base-sberquad-0.02-len_3-filtered-negative-v2
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:ai-forever/ruBert-base", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:43:15+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us
# ruBert-base-sberquad-0.02-len_3-filtered-negative-v2 This model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 7000 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ruBert-base-sberquad-0.02-len_3-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-ai-forever/ruBert-base #license-apache-2.0 #region-us \n", "# ruBert-base-sberquad-0.02-len_3-filtered-negative-v2\n\nThis model is a fine-tuned version of ai-forever/ruBert-base on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- training_steps: 7000", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
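The ruBert adapter card above lists PEFT versions but no loading code; here is a minimal sketch, assuming the repo holds a standard LoRA-style PEFT adapter for ai-forever/ruBert-base — the card does not state the task head, so a plain encoder backbone is used purely for illustration:

```python
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base = AutoModel.from_pretrained("ai-forever/ruBert-base")
# Attach the adapter weights from the fine-tuning repo on top of the base encoder.
model = PeftModel.from_pretrained(base, "Shalazary/ruBert-base-sberquad-0.02-len_3-filtered-negative-v2")
tokenizer = AutoTokenizer.from_pretrained("ai-forever/ruBert-base")

inputs = tokenizer("Пример предложения.", return_tensors="pt")  # "An example sentence."
hidden = model(**inputs).last_hidden_state
print(hidden.shape)
```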
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# load_from_hub in the original snippet was an undefined helper; downloading and unpickling directly is equivalent.
with open(hf_hub_download(repo_id="moebachar/Taxi-v3", filename="q-learning.pkl"), "rb") as f:
    model = pickle.load(f)

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
moebachar/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-16T10:43:37+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
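Continuing the Usage snippet from the Taxi-v3 card above (`model` and `env` as loaded there), here is a short sketch of rolling out the greedy policy; it assumes the pickled dict stores the table under a "qtable" key, as the Hugging Face Deep RL course helpers do — inspect the file if your copy differs:

```python
import numpy as np

qtable = np.array(model["qtable"])  # assumed key; shape (n_states, n_actions)

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```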
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_3-seqsight_16384_512_22M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_3](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_3) dataset. It achieves the following results on the evaluation set: - Loss: 0.7363 - F1 Score: 0.6227 - Accuracy: 0.624 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6755 | 14.29 | 200 | 0.6535 | 0.6187 | 0.622 | | 0.6325 | 28.57 | 400 | 0.6572 | 0.6125 | 0.62 | | 0.6094 | 42.86 | 600 | 0.6587 | 0.6237 | 0.628 | | 0.5878 | 57.14 | 800 | 0.6755 | 0.6140 | 0.614 | | 0.5715 | 71.43 | 1000 | 0.6632 | 0.6214 | 0.626 | | 0.5602 | 85.71 | 1200 | 0.6909 | 0.6201 | 0.62 | | 0.552 | 100.0 | 1400 | 0.6906 | 0.6100 | 0.61 | | 0.5468 | 114.29 | 1600 | 0.7029 | 0.6151 | 0.617 | | 0.5397 | 128.57 | 1800 | 0.6855 | 0.6230 | 0.623 | | 0.5322 | 142.86 | 2000 | 0.7023 | 0.6220 | 0.622 | | 0.5265 | 157.14 | 2200 | 0.6798 | 0.6087 | 0.609 | | 0.5197 | 171.43 | 2400 | 0.7034 | 0.6231 | 0.623 | | 0.5127 | 185.71 | 2600 | 0.7024 | 0.6229 | 0.623 | | 0.5043 | 200.0 | 2800 | 0.6893 | 0.6288 | 0.63 | | 0.4983 | 214.29 | 3000 | 0.7151 | 0.6239 | 0.624 | | 0.4893 | 228.57 | 3200 | 0.7292 | 0.6221 | 0.622 | | 0.4841 | 242.86 | 3400 | 0.7279 | 0.6143 | 0.616 | | 0.4792 | 257.14 | 3600 | 0.7119 | 0.6311 | 0.631 | | 0.4724 | 271.43 | 3800 | 0.7331 | 0.6380 | 0.638 | | 0.465 | 285.71 | 4000 | 0.7117 | 0.6191 | 0.619 | | 0.4585 | 300.0 | 4200 | 0.7220 | 0.6320 | 0.632 | | 0.452 | 314.29 | 4400 | 0.7262 | 0.6211 | 0.621 | | 0.4464 | 328.57 | 4600 | 0.7407 | 0.6299 | 0.63 | | 0.4398 | 342.86 | 4800 | 0.7623 | 0.6114 | 0.612 | | 0.4338 | 357.14 | 5000 | 0.7804 | 0.6281 | 0.628 | | 0.4296 | 371.43 | 5200 | 0.7546 | 0.6267 | 0.627 | | 0.4257 | 385.71 | 5400 | 0.7560 | 0.6191 | 0.619 | | 0.4186 | 400.0 | 5600 | 0.7775 | 0.6119 | 0.612 | | 0.4153 | 414.29 | 5800 | 0.7807 | 0.6241 | 0.624 | | 0.4087 | 428.57 | 6000 | 0.7823 | 0.6171 | 0.617 | | 0.4052 | 442.86 | 6200 | 0.7764 | 0.6120 | 0.612 | | 0.4002 | 457.14 | 6400 | 0.7811 | 0.6278 | 0.628 | | 0.3966 | 471.43 | 6600 | 0.7828 | 0.6211 | 0.621 | | 0.3911 | 485.71 | 6800 | 0.7827 | 0.6201 | 0.62 | | 0.3895 | 500.0 | 7000 | 0.7975 | 0.6161 | 0.616 | | 0.385 | 514.29 | 7200 | 0.7905 | 0.6191 | 0.619 | | 0.3809 | 528.57 | 7400 | 0.8146 | 0.6100 | 0.61 | | 0.3776 | 542.86 | 7600 | 0.8069 | 0.6151 | 0.615 | | 0.3741 | 557.14 | 7800 | 0.8033 | 0.6180 | 0.618 | | 0.3733 | 571.43 | 8000 | 0.7970 | 0.6141 | 0.614 | | 0.37 | 585.71 | 8200 | 0.8140 | 0.6160 | 0.616 | | 0.368 | 600.0 | 8400 | 0.7957 | 0.6179 | 0.618 | | 0.3672 | 614.29 | 8600 | 0.8177 | 0.6179 | 0.618 | | 0.3634 | 628.57 | 8800 | 0.8210 | 0.6181 | 0.618 | | 0.3613 | 642.86 | 9000 | 0.8079 | 0.618 | 0.618 | | 0.3602 | 657.14 | 9200 | 0.8238 | 0.6090 | 0.609 | | 0.3587 | 671.43 | 9400 | 0.8130 | 0.6141 | 0.614 | | 0.3586 | 685.71 | 9600 | 0.8177 | 0.6081 | 0.608 | | 0.3583 | 700.0 | 9800 | 0.8181 | 0.6121 | 0.612 | | 0.3574 | 714.29 | 10000 | 0.8213 | 0.6111 | 0.611 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_3-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_3-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T10:44:54+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_3-seqsight\_16384\_512\_22M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_3 dataset. It achieves the following results on the evaluation set: * Loss: 0.7363 * F1 Score: 0.6227 * Accuracy: 0.624 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
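The hyperparameter list in the GUE_tf_3 card above maps almost one-to-one onto a 🤗 `TrainingArguments` object; the sketch below is an illustrative reconstruction, not the authors' actual training script (the output_dir name is just the model id from the card):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="GUE_tf_3-seqsight_16384_512_22M-L32_all",  # illustrative name
    learning_rate=5e-4,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    seed=42,
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999) from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=10_000,    # "training_steps: 10000"
)
```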
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo facebook/opt-2.7b are installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages.
```bash
pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-opt-2.7b-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-2.7b before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-opt-2.7b-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T10:45:33+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentation to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo facebook/opt-2.7b are installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info is in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-2.7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-2.7b are installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-2.7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo facebook/opt-2.7b are installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-2.7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
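The "Sync" vs "Async" distinction in the Pruna FAQ above can be made concrete; here is a rough sketch of the two timing styles, reusing `model` and `input_ids` from the card's Setup snippet (one common approximation, not Pruna's exact benchmark code):

```python
import time
import torch

# "Sync": flush every queued GPU kernel before reading the clock.
torch.cuda.synchronize()
t0 = time.perf_counter()
out = model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()
print("sync latency (s):", time.perf_counter() - t0)

# "Async": stop as soon as the output is usable on the CPU;
# the .cpu() copy blocks only until this one result is ready.
t0 = time.perf_counter()
out = model.generate(input_ids, max_new_tokens=216).cpu()
print("async latency (s):", time.perf_counter() - t0)
```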
sentence-similarity
sentence-transformers
# hsikchi/dump2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('hsikchi/dump2') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hsikchi/dump2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 7636 with parameters: ``` {'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 0, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
hsikchi/dump2
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:45:41+00:00
[]
[]
TAGS #sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# hsikchi/dump2 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 7636 with parameters: Loss: 'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# hsikchi/dump2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 7636 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# hsikchi/dump2\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 7636 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
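Since the hsikchi/dump2 card mentions clustering and semantic search but only demonstrates encoding, here is a small retrieval sketch on top of the embeddings (the corpus and query are invented for illustration):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("hsikchi/dump2")

corpus = ["A man is eating food.",
          "A monkey is playing drums.",
          "The new movie is so great."]            # toy corpus
corpus_emb = model.encode(corpus, convert_to_tensor=True)

query_emb = model.encode("Who is playing the drums?", convert_to_tensor=True)
scores = util.cos_sim(query_emb, corpus_emb)[0]    # cosine similarity per document
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```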
text-generation
transformers
FP32 version of the original BF16 Mistral safetensors.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
# torch.float is float32, so the BF16 weights are upcast at load time.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", torch_dtype=torch.float)

# Write the upcast weights and matching tokenizer out as a new checkpoint.
model.save_pretrained("./Mistral-7B-v0.1-fp32")
tokenizer.save_pretrained("./Mistral-7B-v0.1-fp32")
```
{"license": "apache-2.0"}
TeeZee/Mistral-7B-v0.1-fp32
null
[ "transformers", "safetensors", "mistral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:47:58+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
FP32 version of the original BF16 Mistral safetensors.
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
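A quick sanity check after running the conversion above, verifying that the saved checkpoint really is FP32 (`torch_dtype="auto"` loads weights in their stored precision instead of upcasting them):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("./Mistral-7B-v0.1-fp32", torch_dtype="auto")
print(next(model.parameters()).dtype)  # expected: torch.float32
```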
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_tf_2-seqsight_16384_512_22M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_22M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_22M) on the [mahdibaghbanzadeh/GUE_tf_2](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_tf_2) dataset. It achieves the following results on the evaluation set: - Loss: 0.8542 - F1 Score: 0.6699 - Accuracy: 0.671 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.646 | 20.0 | 200 | 0.6402 | 0.6294 | 0.631 | | 0.5744 | 40.0 | 400 | 0.6511 | 0.6360 | 0.636 | | 0.5388 | 60.0 | 600 | 0.6533 | 0.6404 | 0.641 | | 0.5089 | 80.0 | 800 | 0.6804 | 0.6467 | 0.647 | | 0.488 | 100.0 | 1000 | 0.6758 | 0.6407 | 0.641 | | 0.4761 | 120.0 | 1200 | 0.6719 | 0.6496 | 0.65 | | 0.467 | 140.0 | 1400 | 0.6766 | 0.6540 | 0.654 | | 0.4601 | 160.0 | 1600 | 0.6930 | 0.656 | 0.656 | | 0.4516 | 180.0 | 1800 | 0.6886 | 0.6490 | 0.649 | | 0.4467 | 200.0 | 2000 | 0.6898 | 0.6558 | 0.656 | | 0.4376 | 220.0 | 2200 | 0.7025 | 0.6568 | 0.657 | | 0.4317 | 240.0 | 2400 | 0.6995 | 0.6520 | 0.652 | | 0.4229 | 260.0 | 2600 | 0.7009 | 0.6445 | 0.645 | | 0.4155 | 280.0 | 2800 | 0.7749 | 0.6583 | 0.661 | | 0.4079 | 300.0 | 3000 | 0.7320 | 0.6405 | 0.641 | | 0.3958 | 320.0 | 3200 | 0.7593 | 0.6499 | 0.65 | | 0.3877 | 340.0 | 3400 | 0.7387 | 0.6449 | 0.645 | | 0.3784 | 360.0 | 3600 | 0.7633 | 0.6460 | 0.646 | | 0.3701 | 380.0 | 3800 | 0.7636 | 0.6386 | 0.639 | | 0.3621 | 400.0 | 4000 | 0.7765 | 0.6545 | 0.655 | | 0.3528 | 420.0 | 4200 | 0.7710 | 0.6568 | 0.657 | | 0.3452 | 440.0 | 4400 | 0.7748 | 0.6528 | 0.653 | | 0.3364 | 460.0 | 4600 | 0.7670 | 0.6447 | 0.645 | | 0.3272 | 480.0 | 4800 | 0.8017 | 0.6549 | 0.655 | | 0.3223 | 500.0 | 5000 | 0.8270 | 0.6589 | 0.659 | | 0.3131 | 520.0 | 5200 | 0.8279 | 0.6381 | 0.639 | | 0.3076 | 540.0 | 5400 | 0.8427 | 0.6500 | 0.65 | | 0.3002 | 560.0 | 5600 | 0.8701 | 0.6580 | 0.658 | | 0.2956 | 580.0 | 5800 | 0.8539 | 0.6475 | 0.648 | | 0.2889 | 600.0 | 6000 | 0.8964 | 0.6490 | 0.649 | | 0.2838 | 620.0 | 6200 | 0.8798 | 0.6528 | 0.653 | | 0.2753 | 640.0 | 6400 | 0.8861 | 0.6597 | 0.66 | | 0.2714 | 660.0 | 6600 | 0.9137 | 0.6570 | 0.657 | | 0.2674 | 680.0 | 6800 | 0.9033 | 0.6500 | 0.65 | | 0.2643 | 700.0 | 7000 | 0.9136 | 0.6510 | 0.651 | | 0.2583 | 720.0 | 7200 | 0.9299 | 0.6467 | 0.647 | | 0.2538 | 740.0 | 7400 | 0.9454 | 0.6509 | 0.651 | | 0.251 | 760.0 | 7600 | 0.9318 | 0.6490 | 0.649 | | 0.2477 | 780.0 | 7800 | 0.9327 | 0.6479 | 0.648 | | 0.2453 | 800.0 | 8000 | 0.9436 | 0.6519 | 0.652 | | 0.242 | 820.0 | 8200 | 0.9600 | 0.6544 | 0.655 | | 0.2396 | 840.0 | 8400 | 0.9662 | 0.6560 | 0.656 | | 0.2386 | 860.0 | 8600 | 0.9504 | 0.6440 | 0.644 | | 0.2358 | 880.0 | 8800 | 0.9468 | 0.6520 | 0.652 | | 0.2343 | 900.0 | 9000 | 0.9678 | 0.6579 | 0.658 | | 0.233 | 920.0 | 9200 | 0.9592 | 0.6509 | 0.651 | | 0.2309 | 940.0 | 9400 | 0.9620 | 0.6519 | 0.652 | | 0.2297 | 960.0 | 9600 | 0.9670 | 0.6528 | 0.653 | | 0.2288 | 980.0 | 9800 | 0.9614 | 0.6470 | 0.647 | | 0.2293 | 1000.0 | 10000 | 0.9652 | 0.6509 | 0.651 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_22M", "model-index": [{"name": "GUE_tf_2-seqsight_16384_512_22M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_tf_2-seqsight_16384_512_22M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_22M", "region:us" ]
null
2024-04-16T10:50:05+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us
GUE\_tf\_2-seqsight\_16384\_512\_22M-L32\_all ============================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_22M on the mahdibaghbanzadeh/GUE\_tf\_2 dataset. It achieves the following results on the evaluation set: * Loss: 0.8542 * F1 Score: 0.6699 * Accuracy: 0.671 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_22M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
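The F1 Score and Accuracy columns reported throughout these GUE cards are standard classification metrics; here is a sketch of computing them from predictions with scikit-learn (the arrays are placeholders, and the cards do not state which F1 averaging was used):

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 0, 1]  # placeholder labels
y_pred = [0, 1, 0, 0, 1]  # placeholder predictions

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 Score:", f1_score(y_true, y_pred, average="weighted"))  # averaging is a guess
```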
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo KoboldAI/OPT-2.7B-Erebus are installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages.
```bash
pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("PrunaAI/KoboldAI-OPT-2.7B-Erebus-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/OPT-2.7B-Erebus")

input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/KoboldAI-OPT-2.7B-Erebus-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T10:50:17+00:00
[]
[]
TAGS #transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentation to learn more here. - Join the Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with GPTQ. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo KoboldAI/OPT-2.7B-Erebus are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. 2. Load & run the model. ## Configurations The configuration info is in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo KoboldAI/OPT-2.7B-Erebus installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo KoboldAI/OPT-2.7B-Erebus installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model KoboldAI/OPT-2.7B-Erebus before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
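The "How to Get Started" section above is still a placeholder; as a minimal sketch — assuming the checkpoint loads with the standard T5 classes implied by the repo's `t5` and `text2text-generation` tags, and with a placeholder input string — usage would look like:

```python
# Minimal sketch only: the model card gives no usage code, so this follows the
# standard transformers text2text pattern; the input text is a placeholder.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "Samuael/geez_t5-base-15k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```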
{"library_name": "transformers", "tags": []}
Samuael/geez_t5-base-15k
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:50:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Test 7B This is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [NousResearch/Nous-Hermes-llama-2-7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) * [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1) ## 🧩 Configuration ```yaml slices: - sources: - model: NousResearch/Nous-Hermes-llama-2-7b layer_range: [0, 32] - model: NousResearch/Nous-Capybara-7B-V1 layer_range: [0, 32] merge_method: slerp base_model: NousResearch/Nous-Hermes-llama-2-7b parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "LTC-AI-Labs/Hermes-Capybara-7B-Test" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
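For readers wondering what `merge_method: slerp` in the configuration above does, here is an illustrative sketch of spherical linear interpolation between two weight tensors. It is a simplified stand-in, not mergekit's actual implementation; mergekit applies the `t` values per layer and per filter as listed in the YAML:

```python
# Simplified slerp sketch: interpolate two matching weight tensors along the
# great circle between their directions. Not mergekit's real code.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_dir = a_flat / (a_flat.norm() + eps)
    b_dir = b_flat / (b_flat.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_dir, b_dir), -1.0, 1.0))
    if omega.abs() < 1e-6:  # near-parallel weights: plain lerp is a safe fallback
        mixed = (1.0 - t) * a_flat + t * b_flat
    else:
        so = torch.sin(omega)
        mixed = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

With `t = 0.5` the two parent tensors contribute equally, which matches the fallback `value: 0.5` in the YAML above.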
{"language": ["en"], "license": "llama2", "tags": ["merge", "mergekit", "lazymergekit", "NousResearch/Nous-Hermes-llama-2-7b", "NousResearch/Nous-Capybara-7B-V1"], "base_model": ["NousResearch/Nous-Hermes-llama-2-7b", "NousResearch/Nous-Capybara-7B-V1"]}
LTC-AI-Labs/Hermes-Capybara-7B-Test
null
[ "transformers", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "NousResearch/Nous-Hermes-llama-2-7b", "NousResearch/Nous-Capybara-7B-V1", "en", "base_model:NousResearch/Nous-Hermes-llama-2-7b", "base_model:NousResearch/Nous-Capybara-7B-V1", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T10:50:41+00:00
[]
[ "en" ]
TAGS #transformers #llama #text-generation #merge #mergekit #lazymergekit #NousResearch/Nous-Hermes-llama-2-7b #NousResearch/Nous-Capybara-7B-V1 #en #base_model-NousResearch/Nous-Hermes-llama-2-7b #base_model-NousResearch/Nous-Capybara-7B-V1 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Test 7B This is a merge of the following models using LazyMergekit: * NousResearch/Nous-Hermes-llama-2-7b * NousResearch/Nous-Capybara-7B-V1 ## Configuration ## Usage
[ "# Test 7B\n\nThis is a merge of the following models using LazyMergekit:\n* NousResearch/Nous-Hermes-llama-2-7b\n* NousResearch/Nous-Capybara-7B-V1", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #llama #text-generation #merge #mergekit #lazymergekit #NousResearch/Nous-Hermes-llama-2-7b #NousResearch/Nous-Capybara-7B-V1 #en #base_model-NousResearch/Nous-Hermes-llama-2-7b #base_model-NousResearch/Nous-Capybara-7B-V1 #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Test 7B\n\nThis is a merge of the following models using LazyMergekit:\n* NousResearch/Nous-Hermes-llama-2-7b\n* NousResearch/Nous-Capybara-7B-V1", "## Configuration", "## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
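The usage section above is empty and the repo tags name no model architecture, so the only safe starting point is fetching the repository files for inspection; the sketch below makes no assumption about how the weights should be loaded:

```python
# Hedged sketch: the card specifies no model class, so just download the repo
# files locally and inspect them before choosing a loading path.
from huggingface_hub import snapshot_download

local_dir = snapshot_download("HenryCai1129/LlamaAdapter-llama2-happy-300-prompt")
print(f"Files downloaded to: {local_dir}")
```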
{"library_name": "transformers", "tags": []}
HenryCai1129/LlamaAdapter-llama2-happy-300-prompt
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:51:34+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Licwit/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
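As a concrete instance of the resume command above — the config path and run id below are illustrative placeholders (the defaults used in the Deep RL course), not the exact values of this run:

```bash
# Placeholder values; substitute your own configuration file and run id.
mlagents-learn ./config/ppo/Huggy.yaml --run-id=Huggy --resume
```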
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
Licwit/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-16T10:51:51+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn how to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser: 1. If the environment is part of the ML-Agents official environments, go to URL 2. Step 1: Find your model_id: Licwit/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Licwit/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Licwit/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_tata-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_tata) dataset. It achieves the following results on the evaluation set: - Loss: 3.2095 - F1 Score: 0.5938 - Accuracy: 0.5938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:-------:|:-----:|:---------------:|:--------:|:--------:| | 0.5645 | 66.67 | 200 | 0.9524 | 0.5744 | 0.5742 | | 0.2597 | 133.33 | 400 | 1.3889 | 0.5890 | 0.5889 | | 0.148 | 200.0 | 600 | 1.6675 | 0.5875 | 0.5873 | | 0.1026 | 266.67 | 800 | 1.7695 | 0.5889 | 0.5889 | | 0.0822 | 333.33 | 1000 | 1.8854 | 0.5759 | 0.5775 | | 0.069 | 400.0 | 1200 | 1.9416 | 0.5907 | 0.5905 | | 0.0605 | 466.67 | 1400 | 1.9423 | 0.5970 | 0.5971 | | 0.0529 | 533.33 | 1600 | 1.9805 | 0.5951 | 0.5954 | | 0.048 | 600.0 | 1800 | 2.0136 | 0.6071 | 0.6069 | | 0.0435 | 666.67 | 2000 | 2.0911 | 0.6006 | 0.6003 | | 0.0393 | 733.33 | 2200 | 2.2224 | 0.5984 | 0.5987 | | 0.0355 | 800.0 | 2400 | 2.2799 | 0.6006 | 0.6003 | | 0.0329 | 866.67 | 2600 | 2.2390 | 0.5973 | 0.5971 | | 0.0313 | 933.33 | 2800 | 2.4592 | 0.6038 | 0.6036 | | 0.0281 | 1000.0 | 3000 | 2.6140 | 0.5845 | 0.5856 | | 0.026 | 1066.67 | 3200 | 2.4558 | 0.6079 | 0.6085 | | 0.0237 | 1133.33 | 3400 | 2.6279 | 0.6134 | 0.6134 | | 0.0221 | 1200.0 | 3600 | 2.8173 | 0.6038 | 0.6036 | | 0.0211 | 1266.67 | 3800 | 2.8912 | 0.6071 | 0.6069 | | 0.0208 | 1333.33 | 4000 | 2.9587 | 0.5981 | 0.5987 | | 0.0198 | 1400.0 | 4200 | 2.8762 | 0.6120 | 0.6117 | | 0.0174 | 1466.67 | 4400 | 2.8560 | 0.6087 | 0.6085 | | 0.0171 | 1533.33 | 4600 | 2.8165 | 0.6112 | 0.6117 | | 0.0164 | 1600.0 | 4800 | 2.8069 | 0.6103 | 0.6101 | | 0.0158 | 1666.67 | 5000 | 3.0465 | 0.5957 | 0.5954 | | 0.0162 | 1733.33 | 5200 | 2.8305 | 0.6055 | 0.6052 | | 0.015 | 1800.0 | 5400 | 2.8321 | 0.6151 | 0.6150 | | 0.0144 | 1866.67 | 5600 | 2.8094 | 0.6022 | 0.6020 | | 0.0141 | 1933.33 | 5800 | 2.8684 | 0.6134 | 0.6134 | | 0.0135 | 2000.0 | 6000 | 3.0438 | 0.6168 | 0.6166 | | 0.0135 | 2066.67 | 6200 | 2.7445 | 0.6106 | 0.6117 | | 0.0133 | 2133.33 | 6400 | 3.0867 | 0.6136 | 0.6134 | | 0.0128 | 2200.0 | 6600 | 3.1281 | 0.6051 | 0.6052 | | 0.0125 | 2266.67 | 6800 | 2.7570 | 0.6103 | 0.6101 | | 0.012 | 2333.33 | 7000 | 2.8887 | 0.6151 | 0.6150 | | 0.0119 | 2400.0 | 7200 | 3.0428 | 0.6085 | 0.6085 | | 0.0112 | 2466.67 | 7400 | 2.8917 | 0.6118 | 0.6117 | | 0.0115 | 2533.33 | 7600 | 3.1530 | 0.6069 | 0.6069 | | 0.011 | 2600.0 | 7800 | 3.1575 | 0.6181 | 0.6183 | | 0.0111 | 2666.67 | 8000 | 3.0274 | 0.6168 | 0.6166 | | 0.0109 | 2733.33 | 8200 | 3.1757 | 0.6233 | 0.6232 | | 0.011 | 2800.0 | 8400 | 3.1241 | 0.6234 | 0.6232 | | 0.0101 | 2866.67 | 
8600 | 3.0299 | 0.6167 | 0.6166 | | 0.0099 | 2933.33 | 8800 | 3.0161 | 0.6232 | 0.6232 | | 0.01 | 3000.0 | 9000 | 3.1487 | 0.6215 | 0.6215 | | 0.0101 | 3066.67 | 9200 | 3.0696 | 0.6234 | 0.6232 | | 0.0097 | 3133.33 | 9400 | 3.1756 | 0.6181 | 0.6183 | | 0.0097 | 3200.0 | 9600 | 3.1599 | 0.6200 | 0.6199 | | 0.0094 | 3266.67 | 9800 | 3.1744 | 0.6181 | 0.6183 | | 0.0092 | 3333.33 | 10000 | 3.1562 | 0.6215 | 0.6215 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
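A minimal sketch of loading this adapter on top of its base model with PEFT; the auto class and the `trust_remote_code` flag below are assumptions, since the card does not say how the seqsight base model should be instantiated:

```python
# Sketch only: AutoModel and trust_remote_code=True are assumptions; adjust
# the model class to however the seqsight base model is actually defined.
from peft import PeftModel
from transformers import AutoModel, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_34M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModel.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```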
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_300_tata-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_tata-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T10:52:22+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_300\_tata-seqsight\_16384\_512\_34M-L32\_all ============================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_tata dataset. It achieves the following results on the evaluation set: * Loss: 3.2095 * F1 Score: 0.5938 * Accuracy: 0.5938 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
Use the ToolsBaer MSG to Office 365 Importer program to quickly and safely convert Outlook MSG to an Office 365 account. It is a stand-alone email importer solution that utilizes the latest developments in technology. The program that converts MSG files to Office 365 offers a complete solution for handling the email transfer process for users' Office 365 accounts. Users can swiftly and effectively convert multiple MSG files to Office 365 accounts without losing any information. The batch conversion function of the program allows users to convert MSG files to Office 365 mail in bulk. All email features, including name, CC, BCC, to, from, hyperlinks, pictures, and attachments, can be exported with the program. The conversion program ensures perfect conversion accuracy. The tool can be used safely in both personal and professional contexts. In the program's trial version, up to 10 emails per folder can be converted. Installing ToolsBaer MSG to Office 365 Importer Software on Windows 11, 10, 8.1, 8, 7, Vista, XP, and previous versions is a simple process. Read More: http://www.toolsbaer.com/msg-to-office-365-importer/
{}
madelineoliver/ToolsBaer-MSG-to-Office-365-Importer
null
[ "region:us" ]
null
2024-04-16T10:52:50+00:00
[]
[]
TAGS #region-us
Use the ToolsBaer MSG to Office 365 Importer program to quickly and safely convert Outlook MSG to an Office 365 account. It is a stand-alone email importer solution that utilizes the latest developments in technology. The program that converts MSG files to Office 365 offers a complete solution for handling the email transfer process for users' Office 365 accounts. Users can swiftly and effectively convert multiple MSG files to Office 365 accounts without losing any information. The batch conversion function of the program allows users to convert MSG files to Office 365 mail in bulk. All email features, including name, CC, BCC, to, from, hyperlinks, pictures, and attachments, can be exported with the program. The conversion program ensures perfect conversion accuracy. The tool can be used safely in both personal and professional contexts. In the program's trial version, up to 10 emails per folder can be converted. Installing ToolsBaer MSG to Office 365 Importer Software on Windows 11, 10, 8.1, 8, 7, Vista, XP, and previous versions is a simple process. Read More: URL
[]
[ "TAGS\n#region-us \n" ]
multiple-choice
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
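The "How to Get Started" section above is a placeholder; the repo tags mark this as a BERT multiple-choice model, so a hedged sketch following the standard `AutoModelForMultipleChoice` pattern (with placeholder prompt and choices) would be:

```python
# Hedged sketch: prompt and choices are placeholders, and the checkpoint is
# assumed to load with the standard BERT multiple-choice head.
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer

model_id = "sahithya20/bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

prompt = "The capital of France is"
choices = ["Paris.", "Berlin."]
enc = tokenizer([prompt] * len(choices), choices, padding=True, return_tensors="pt")
batch = {k: v.unsqueeze(0) for k, v in enc.items()}  # (batch, num_choices, seq_len)
with torch.no_grad():
    logits = model(**batch).logits
print(choices[logits.argmax(dim=-1).item()])
```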
{"library_name": "transformers", "tags": []}
sahithya20/bert-base-uncased
null
[ "transformers", "safetensors", "bert", "multiple-choice", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:52:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #multiple-choice #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #multiple-choice #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial", "results": []}]}
Laveugnol/idefics2-8b-docvqa-finetuned-tutorial
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-16T10:53:12+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_300_notata-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.5825 - F1 Score: 0.8510 - Accuracy: 0.8511 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.561 | 9.52 | 200 | 0.4627 | 0.7745 | 0.7767 | | 0.4414 | 19.05 | 400 | 0.4427 | 0.7967 | 0.7967 | | 0.3979 | 28.57 | 600 | 0.4367 | 0.8048 | 0.8050 | | 0.3484 | 38.1 | 800 | 0.3857 | 0.8295 | 0.8295 | | 0.3004 | 47.62 | 1000 | 0.4026 | 0.8290 | 0.8293 | | 0.2665 | 57.14 | 1200 | 0.4122 | 0.8368 | 0.8370 | | 0.2378 | 66.67 | 1400 | 0.4217 | 0.8385 | 0.8387 | | 0.2167 | 76.19 | 1600 | 0.4197 | 0.8402 | 0.8404 | | 0.2004 | 85.71 | 1800 | 0.4171 | 0.8454 | 0.8455 | | 0.186 | 95.24 | 2000 | 0.4512 | 0.8394 | 0.8398 | | 0.1745 | 104.76 | 2200 | 0.4339 | 0.8481 | 0.8481 | | 0.164 | 114.29 | 2400 | 0.5011 | 0.8458 | 0.8461 | | 0.1574 | 123.81 | 2600 | 0.4631 | 0.8499 | 0.8500 | | 0.149 | 133.33 | 2800 | 0.5112 | 0.8379 | 0.8385 | | 0.143 | 142.86 | 3000 | 0.5050 | 0.8424 | 0.8428 | | 0.137 | 152.38 | 3200 | 0.5065 | 0.8402 | 0.8406 | | 0.131 | 161.9 | 3400 | 0.4719 | 0.8492 | 0.8493 | | 0.1268 | 171.43 | 3600 | 0.5239 | 0.8418 | 0.8421 | | 0.1237 | 180.95 | 3800 | 0.5130 | 0.8441 | 0.8444 | | 0.119 | 190.48 | 4000 | 0.5491 | 0.8397 | 0.8402 | | 0.1159 | 200.0 | 4200 | 0.5232 | 0.8419 | 0.8421 | | 0.1136 | 209.52 | 4400 | 0.5330 | 0.8403 | 0.8406 | | 0.1108 | 219.05 | 4600 | 0.5606 | 0.8392 | 0.8396 | | 0.1083 | 228.57 | 4800 | 0.5570 | 0.8412 | 0.8415 | | 0.1052 | 238.1 | 5000 | 0.5429 | 0.8435 | 0.8438 | | 0.1031 | 247.62 | 5200 | 0.5845 | 0.8403 | 0.8408 | | 0.1018 | 257.14 | 5400 | 0.5777 | 0.8433 | 0.8436 | | 0.0988 | 266.67 | 5600 | 0.5759 | 0.8454 | 0.8457 | | 0.0976 | 276.19 | 5800 | 0.5731 | 0.8422 | 0.8425 | | 0.0954 | 285.71 | 6000 | 0.5748 | 0.8383 | 0.8387 | | 0.0946 | 295.24 | 6200 | 0.5754 | 0.8453 | 0.8455 | | 0.0928 | 304.76 | 6400 | 0.6005 | 0.8400 | 0.8404 | | 0.0915 | 314.29 | 6600 | 0.5792 | 0.8474 | 0.8476 | | 0.0915 | 323.81 | 6800 | 0.5772 | 0.8462 | 0.8464 | | 0.0886 | 333.33 | 7000 | 0.6006 | 0.8408 | 0.8412 | | 0.087 | 342.86 | 7200 | 0.6034 | 0.8404 | 0.8408 | | 0.0865 | 352.38 | 7400 | 0.5963 | 0.8443 | 0.8445 | | 0.0865 | 361.9 | 7600 | 0.5994 | 0.8452 | 0.8455 | | 0.0852 | 371.43 | 7800 | 0.5795 | 0.8501 | 0.8502 | | 0.0841 | 380.95 | 8000 | 0.5883 | 0.8504 | 0.8506 | | 0.0844 | 390.48 | 8200 | 0.5736 | 0.8506 | 0.8508 | | 0.0824 | 400.0 | 8400 | 0.5807 | 0.8476 | 0.8477 | | 0.0822 | 409.52 | 8600 | 0.6068 | 0.8453 | 
0.8455 | | 0.0818 | 419.05 | 8800 | 0.6032 | 0.8452 | 0.8455 | | 0.0809 | 428.57 | 9000 | 0.6096 | 0.8466 | 0.8468 | | 0.0807 | 438.1 | 9200 | 0.6199 | 0.8433 | 0.8436 | | 0.0797 | 447.62 | 9400 | 0.6162 | 0.8438 | 0.8442 | | 0.0786 | 457.14 | 9600 | 0.6101 | 0.8466 | 0.8468 | | 0.0792 | 466.67 | 9800 | 0.6068 | 0.8449 | 0.8451 | | 0.079 | 476.19 | 10000 | 0.6096 | 0.8453 | 0.8455 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_300_notata-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_notata-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T10:53:16+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_300\_notata-seqsight\_16384\_512\_34M-L32\_all =============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.5825 * F1 Score: 0.8510 * Accuracy: 0.8511 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
mookymook/blip-image-captioning-wikiart-mini-10epoch
null
[ "transformers", "safetensors", "blip", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T10:55:40+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #blip #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_all-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_all) dataset. It achieves the following results on the evaluation set: - Loss: 0.6495 - F1 Score: 0.7035 - Accuracy: 0.7035 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6454 | 8.33 | 200 | 0.6143 | 0.6625 | 0.6628 | | 0.5882 | 16.67 | 400 | 0.5925 | 0.6863 | 0.6865 | | 0.5585 | 25.0 | 600 | 0.5868 | 0.6958 | 0.6958 | | 0.5324 | 33.33 | 800 | 0.5927 | 0.6986 | 0.6986 | | 0.5124 | 41.67 | 1000 | 0.5978 | 0.7057 | 0.7057 | | 0.4949 | 50.0 | 1200 | 0.6098 | 0.7052 | 0.7054 | | 0.4795 | 58.33 | 1400 | 0.6135 | 0.7052 | 0.7056 | | 0.4657 | 66.67 | 1600 | 0.6046 | 0.7034 | 0.7035 | | 0.4558 | 75.0 | 1800 | 0.6214 | 0.7022 | 0.7035 | | 0.4451 | 83.33 | 2000 | 0.6318 | 0.7045 | 0.7051 | | 0.4366 | 91.67 | 2200 | 0.6402 | 0.7068 | 0.7074 | | 0.4277 | 100.0 | 2400 | 0.6199 | 0.7072 | 0.7074 | | 0.4208 | 108.33 | 2600 | 0.6644 | 0.7010 | 0.7022 | | 0.4137 | 116.67 | 2800 | 0.6705 | 0.6963 | 0.6981 | | 0.4086 | 125.0 | 3000 | 0.6525 | 0.7083 | 0.7084 | | 0.4007 | 133.33 | 3200 | 0.6693 | 0.6987 | 0.7005 | | 0.3929 | 141.67 | 3400 | 0.6643 | 0.6917 | 0.6937 | | 0.3882 | 150.0 | 3600 | 0.6439 | 0.7051 | 0.7056 | | 0.3835 | 158.33 | 3800 | 0.6875 | 0.6857 | 0.6892 | | 0.3757 | 166.67 | 4000 | 0.6959 | 0.6902 | 0.6927 | | 0.37 | 175.0 | 4200 | 0.7004 | 0.6875 | 0.6910 | | 0.3654 | 183.33 | 4400 | 0.6825 | 0.6968 | 0.6980 | | 0.3574 | 191.67 | 4600 | 0.7087 | 0.6913 | 0.6934 | | 0.3556 | 200.0 | 4800 | 0.7130 | 0.6889 | 0.6916 | | 0.3482 | 208.33 | 5000 | 0.7095 | 0.7003 | 0.7007 | | 0.3455 | 216.67 | 5200 | 0.7112 | 0.6935 | 0.6958 | | 0.3398 | 225.0 | 5400 | 0.7044 | 0.6980 | 0.6993 | | 0.3354 | 233.33 | 5600 | 0.7298 | 0.6990 | 0.7002 | | 0.3297 | 241.67 | 5800 | 0.7221 | 0.6998 | 0.7007 | | 0.3272 | 250.0 | 6000 | 0.7217 | 0.6967 | 0.6976 | | 0.3226 | 258.33 | 6200 | 0.7445 | 0.6959 | 0.6975 | | 0.3196 | 266.67 | 6400 | 0.7626 | 0.6960 | 0.6978 | | 0.3159 | 275.0 | 6600 | 0.7440 | 0.6958 | 0.6971 | | 0.3136 | 283.33 | 6800 | 0.7718 | 0.6912 | 0.6926 | | 0.3085 | 291.67 | 7000 | 0.7692 | 0.6959 | 0.6971 | | 0.3073 | 300.0 | 7200 | 0.7725 | 0.6928 | 0.6949 | | 0.3043 | 308.33 | 7400 | 0.7715 | 0.6908 | 0.6931 | | 0.3014 | 316.67 | 7600 | 0.7701 | 0.6946 | 0.6961 | | 0.2987 | 325.0 | 7800 | 0.7939 | 0.6941 | 0.6963 | | 0.2969 | 333.33 | 8000 | 0.7939 | 0.6947 | 0.6973 | | 0.2942 | 341.67 | 8200 | 0.7742 | 0.6968 | 0.6985 | | 0.293 | 350.0 | 8400 | 0.7917 | 0.6942 | 0.6961 | | 0.2913 | 358.33 | 8600 | 0.7655 | 0.6984 | 0.6997 | | 
0.2898 | 366.67 | 8800 | 0.7681 | 0.6970 | 0.6983 | | 0.2877 | 375.0 | 9000 | 0.7949 | 0.6961 | 0.6980 | | 0.2868 | 383.33 | 9200 | 0.7875 | 0.6952 | 0.6970 | | 0.2845 | 391.67 | 9400 | 0.7887 | 0.6963 | 0.6981 | | 0.2844 | 400.0 | 9600 | 0.8030 | 0.6951 | 0.6973 | | 0.284 | 408.33 | 9800 | 0.8011 | 0.6934 | 0.6954 | | 0.2841 | 416.67 | 10000 | 0.7923 | 0.6955 | 0.6973 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_core_all-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_all-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T10:55:57+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_core\_all-seqsight\_16384\_512\_34M-L32\_all ============================================================= This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_all dataset. It achieves the following results on the evaluation set: * Loss: 0.6495 * F1 Score: 0.7035 * Accuracy: 0.7035 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_notata-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_notata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_notata) dataset. It achieves the following results on the evaluation set: - Loss: 0.6643 - F1 Score: 0.7084 - Accuracy: 0.7085 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6402 | 9.52 | 200 | 0.5812 | 0.6938 | 0.6940 | | 0.5732 | 19.05 | 400 | 0.5636 | 0.7102 | 0.7104 | | 0.5353 | 28.57 | 600 | 0.5802 | 0.7200 | 0.7217 | | 0.5038 | 38.1 | 800 | 0.5680 | 0.7294 | 0.7296 | | 0.4775 | 47.62 | 1000 | 0.6074 | 0.7271 | 0.7290 | | 0.4574 | 57.14 | 1200 | 0.5937 | 0.7311 | 0.7319 | | 0.4417 | 66.67 | 1400 | 0.6056 | 0.7209 | 0.7239 | | 0.4268 | 76.19 | 1600 | 0.5886 | 0.7324 | 0.7328 | | 0.4153 | 85.71 | 1800 | 0.6204 | 0.7299 | 0.7317 | | 0.405 | 95.24 | 2000 | 0.6096 | 0.7294 | 0.7305 | | 0.3949 | 104.76 | 2200 | 0.6342 | 0.7253 | 0.7266 | | 0.3853 | 114.29 | 2400 | 0.6395 | 0.7237 | 0.7260 | | 0.3768 | 123.81 | 2600 | 0.6368 | 0.7247 | 0.7258 | | 0.3694 | 133.33 | 2800 | 0.6426 | 0.7349 | 0.7351 | | 0.3613 | 142.86 | 3000 | 0.6437 | 0.7255 | 0.7272 | | 0.3531 | 152.38 | 3200 | 0.6689 | 0.7290 | 0.7296 | | 0.3449 | 161.9 | 3400 | 0.6824 | 0.7277 | 0.7288 | | 0.3396 | 171.43 | 3600 | 0.7133 | 0.7234 | 0.7253 | | 0.3314 | 180.95 | 3800 | 0.7175 | 0.7087 | 0.7121 | | 0.3247 | 190.48 | 4000 | 0.6696 | 0.7178 | 0.7192 | | 0.3188 | 200.0 | 4200 | 0.6871 | 0.7192 | 0.7207 | | 0.314 | 209.52 | 4400 | 0.7244 | 0.7107 | 0.7132 | | 0.3066 | 219.05 | 4600 | 0.7293 | 0.7116 | 0.7143 | | 0.3021 | 228.57 | 4800 | 0.7236 | 0.7161 | 0.7181 | | 0.2955 | 238.1 | 5000 | 0.7484 | 0.7164 | 0.7181 | | 0.2902 | 247.62 | 5200 | 0.7533 | 0.7184 | 0.7202 | | 0.2855 | 257.14 | 5400 | 0.7458 | 0.7203 | 0.7217 | | 0.282 | 266.67 | 5600 | 0.7895 | 0.7128 | 0.7157 | | 0.2779 | 276.19 | 5800 | 0.7838 | 0.7117 | 0.7143 | | 0.2728 | 285.71 | 6000 | 0.7605 | 0.7146 | 0.7162 | | 0.2702 | 295.24 | 6200 | 0.7581 | 0.7118 | 0.7136 | | 0.2659 | 304.76 | 6400 | 0.8277 | 0.7035 | 0.7072 | | 0.2619 | 314.29 | 6600 | 0.7850 | 0.7184 | 0.7202 | | 0.2583 | 323.81 | 6800 | 0.7949 | 0.7093 | 0.7117 | | 0.2567 | 333.33 | 7000 | 0.8003 | 0.7076 | 0.7102 | | 0.2533 | 342.86 | 7200 | 0.7865 | 0.7162 | 0.7179 | | 0.2497 | 352.38 | 7400 | 0.7961 | 0.7155 | 0.7170 | | 0.2482 | 361.9 | 7600 | 0.8009 | 0.7084 | 0.7109 | | 0.2439 | 371.43 | 7800 | 0.8236 | 0.7077 | 0.7100 | | 0.2443 | 380.95 | 8000 | 0.8238 | 0.7069 | 0.7096 | | 0.2396 | 390.48 | 8200 | 0.8187 | 0.7122 | 0.7140 | | 0.2383 | 400.0 | 8400 | 0.8283 | 0.7077 | 0.7102 | | 0.2359 | 409.52 | 8600 | 0.8413 | 
0.7085 | 0.7111 | | 0.2348 | 419.05 | 8800 | 0.8179 | 0.7115 | 0.7136 | | 0.234 | 428.57 | 9000 | 0.8325 | 0.7116 | 0.7138 | | 0.2324 | 438.1 | 9200 | 0.8326 | 0.7114 | 0.7134 | | 0.2308 | 447.62 | 9400 | 0.8288 | 0.7103 | 0.7125 | | 0.2306 | 457.14 | 9600 | 0.8363 | 0.7119 | 0.7142 | | 0.23 | 466.67 | 9800 | 0.8280 | 0.7124 | 0.7145 | | 0.2294 | 476.19 | 10000 | 0.8349 | 0.7126 | 0.7147 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_core_notata-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_notata-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T10:56:40+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_core\_notata-seqsight\_16384\_512\_34M-L32\_all ================================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_notata dataset. It achieves the following results on the evaluation set: * Loss: 0.6643 * F1 Score: 0.7084 * Accuracy: 0.7085 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-sentiment This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2644 - Accuracy: 0.0677 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.184 | 1.0 | 4210 | 0.2644 | 0.0677 | | 0.1343 | 2.0 | 8420 | 0.3831 | 0.0333 | | 0.1024 | 3.0 | 12630 | 0.4026 | 0.0310 | | 0.0599 | 4.0 | 16840 | 0.4836 | 0.0252 | | 0.0412 | 5.0 | 21050 | 0.5138 | 0.0310 | ### Framework versions - Transformers 4.32.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-sentiment", "results": []}]}
actaylor/distilbert-sentiment
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:01:12+00:00
[]
[]
TAGS #transformers #pytorch #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-sentiment ==================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2644 * Accuracy: 0.0677 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.32.1 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.32.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GPT_Neo_llmcs_clean This model is a fine-tuned version of [EleutherAI/gpt-neo-125m](https://huggingface.co/EleutherAI/gpt-neo-125m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2550 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.3028 | 1.0 | 10537 | 2.3814 | | 2.0772 | 2.0 | 21074 | 2.2768 | | 1.9014 | 3.0 | 31611 | 2.2550 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/gpt-neo-125m", "model-index": [{"name": "GPT_Neo_llmcs_clean", "results": []}]}
kaiheilauser/GPT_Neo_llmcs_clean
null
[ "transformers", "tensorboard", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:EleutherAI/gpt-neo-125m", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:02:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us
GPT\_Neo\_llmcs\_clean ====================== This model is a fine-tuned version of EleutherAI/gpt-neo-125m on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.2550 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt_neo #text-generation #generated_from_trainer #base_model-EleutherAI/gpt-neo-125m #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
null
null
# #Roleplay #Multimodal #Vision #Based #Unhinged #Unaligned #Uncensored <!-- Alpaca --> <!-- In this repository you can find **GGUF-IQ-Imatrix** quants for [Author/Model](https://huggingface.co/Author/Model) and if needed some basic SillyTavern presets to get started you can try these [here](https://huggingface.co/Lewdiculous/Model-Requests/tree/main/data/presets/lewdicu-3.0.2-mistral-0.2). <br> Your feedback about this page or the model for the authors is appreciated in the Community tab. --> <!-- ChatML --> In this repository you can find **GGUF-IQ-Imatrix** quants for [ResplendentAI/Aura_v2_7B](https://huggingface.co/ResplendentAI/Aura_v2_7B) which recommends using the ChatML prompt format. <br> Your feedback about this page or the model for the authors is appreciated in the Community tab. <!-- Unspecified/Other --> <!-- In this repository you can find **GGUF-IQ-Imatrix** quants for [Author/Model](https://huggingface.co/Author/Model). You might find additional information in the original model card page. <br> Your feedback about this page or the model for the authors is appreciated in the Community tab. --> > [!TIP] > **Vision:** <br> > This is a **#multimodal** model that also has optional **#vision** capabilities. <br> Expand the relevant sections below and read the full card information if you also want to make use of that functionality. > > **Quant options:** <br> > Reading below you can also find quant option recommendations for some common GPU VRAM capacities. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/r3IyJK_421bE9cJvp2BAn.png) *A poetic and soulful sentience simulation.* <br> # General recommendations for quant options: <details><summary> ⇲ Click here to expand/hide general common recommendations. </summary> *Assuming a context size of 8192 for simplicity and 1GB of Operating System VRAM overhead with some safety margin to avoid overflowing buffers...* <br> <br> **For 11-12GB VRAM:** <br> A GPU with **11-12GB** of VRAM capacity can comfortably use the **Q6_K-imat** quant option and run it at good speeds. <br> This is the same with or without using #vision capabilities. <br> <br> **For 8GB VRAM:** <br> If not using #vision, for GPUs with **8GB** of VRAM capacity the **Q5_K_M-imat** quant option will fit comfortably and should run at good speeds. <br> If **you are** also using #vision from this model opt for the **Q4_K_M-imat** quant option to avoid filling the buffers and potential slowdowns. <br><br> **For 6GB VRAM:** <br> If not using #vision, for GPUs with **6GB** of VRAM capacity the **IQ3_M-imat** quant option should fit comfortably and run at good speeds. <br> If **you are** also using #vision from this model opt for the **IQ3_XXS-imat** quant option. <br><br> </details><br> # Quantization process information: <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> ```python quantization_options = [ "IQ3_M", "IQ3_XXS", "Q4_K_M", "Q4_K_S", "IQ4_NL", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0" ] ``` **Steps performed:** ``` Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants) ``` The latest of **llama.cpp** available at the time was used, with [imatrix-with-rp-ex.txt](https://huggingface.co/Lewdiculous/Model-Requests/blob/main/data/imatrix/imatrix-with-rp-ex.txt) as calibration data. </details><br> # About "Imatrix": <details><summary> ⇲ Click here to expand/hide more information about this topic. 
</summary> It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) > [!NOTE] > For imatrix data generation, kalomaze's `groups_merged.txt` with additional roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Model-Requests/blob/main/data/imatrix/imatrix-with-rp-ex.txt) for reference. This was just to add a bit more diversity to the data with the intended use case in mind. </details><br> # Vision/multimodal capabilities: <details><summary> ⇲ Click here to expand/hide how this would work in practice in a roleplay chat. </summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/NtDLpyv0WY2yT1OWaDfzh.png) </details><br> <details><summary> ⇲ Click here to expand/hide how your SillyTavern Image Captions extension settings should look. </summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/ayOpP2AdKr15lOugIwa3U.png) </details><br> # Required for vision functionality: > [!WARNING] > To use the multimodal capabilities of this model, such as **vision**, you also need to load the specified **mmproj** file, you can get it [here](https://huggingface.co/cjpais/llava-1.6-mistral-7b-gguf/blob/main/mmproj-model-f16.gguf) or as uploaded in the **mmproj** folder in the repository. 1: Make sure you are using the latest version of [KoboldCpp](https://github.com/LostRuins/koboldcpp). 2: Load the **mmproj file** by using the corresponding section in the interface: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/3bAsQJsSp69dHbe7sxxem.png) 2.1: For **CLI** users, you can load the **mmproj file** by adding the respective flag to your usual command: ``` --mmproj your-mmproj-file.gguf ```
{"license": "other", "tags": ["gguf", "quantized", "roleplay", "multimodal", "vision", "llava", "sillytavern", "merge", "mistral", "conversational"], "inference": false}
Lewdiculous/Aura_v2_7B-GGUF-IQ-Imatrix
null
[ "gguf", "quantized", "roleplay", "multimodal", "vision", "llava", "sillytavern", "merge", "mistral", "conversational", "license:other", "region:us" ]
null
2024-04-16T11:03:10+00:00
[]
[]
TAGS #gguf #quantized #roleplay #multimodal #vision #llava #sillytavern #merge #mistral #conversational #license-other #region-us
# #Roleplay #Multimodal #Vision #Based #Unhinged #Unaligned #Uncensored In this repository you can find GGUF-IQ-Imatrix quants for ResplendentAI/Aura_v2_7B which recommends using the ChatML prompt format. <br> Your feedback about this page or the model for the authors is appreciated in the Community tab. > [!TIP] > Vision: <br> > This is a #multimodal model that also has optional #vision capabilities. <br> Expand the relevant sections below and read the full card information if you also want to make use of that functionality. > > Quant options: <br> > Reading below you can also find quant option recommendations for some common GPU VRAM capacities. !image/png *A poetic and soulful sentience simulation.* <br> # General recommendations for quant options: <details><summary> ⇲ Click here to expand/hide general common recommendations. </summary> *Assuming a context size of 8192 for simplicity and 1GB of Operating System VRAM overhead with some safety margin to avoid overflowing buffers...* <br> <br> For 11-12GB VRAM: <br> A GPU with 11-12GB of VRAM capacity can comfortably use the Q6_K-imat quant option and run it at good speeds. <br> This is the same with or without using #vision capabilities. <br> <br> For 8GB VRAM: <br> If not using #vision, for GPUs with 8GB of VRAM capacity the Q5_K_M-imat quant option will fit comfortably and should run at good speeds. <br> If you are also using #vision from this model opt for the Q4_K_M-imat quant option to avoid filling the buffers and potential slowdowns. <br><br> For 6GB VRAM: <br> If not using #vision, for GPUs with 6GB of VRAM capacity the IQ3_M-imat quant option should fit comfortably and run at good speeds. <br> If you are also using #vision from this model opt for the IQ3_XXS-imat quant option. <br><br> </details><br> # Quantization process information: <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> Steps performed: The latest of URL available at the time was used, with URL as calibration data. </details><br> # About "Imatrix": <details><summary> ⇲ Click here to expand/hide more information about this topic. </summary> It stands for Importance Matrix, a technique used to improve the quality of quantized models. The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [[1]](URL [[2]](URL > [!NOTE] > For imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here for reference. This was just to add a bit more diversity to the data with the intended use case in mind. </details><br> # Vision/multimodal capabilities: <details><summary> ⇲ Click here to expand/hide how this would work in practice in a roleplay chat. </summary> !image/png </details><br> <details><summary> ⇲ Click here to expand/hide how your SillyTavern Image Captions extension settings should look. </summary> !image/png </details><br> # Required for vision functionality: > [!WARNING] > To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file, you can get it here or as uploaded in the mmproj folder in the repository. 1: Make sure you are using the latest version of KoboldCpp. 
2: Load the mmproj file by using the corresponding section in the interface: !image/png 2.1: For CLI users, you can load the mmproj file by adding the respective flag to your usual command:
[ "# #Roleplay #Multimodal #Vision #Based #Unhinged #Unaligned #Uncensored\n\n\n\n\n\nIn this repository you can find GGUF-IQ-Imatrix quants for ResplendentAI/Aura_v2_7B which recommends using the ChatML prompt format. <br> Your feedback about this page or the model for the authors is appreciated in the Community tab.\n\n\n\n\n> [!TIP]\n> Vision: <br>\n> This is a #multimodal model that also has optional #vision capabilities. <br> Expand the relevant sections bellow and read the full card information if you also want to make use that functionality.\n>\n> Quant options: <br>\n> Reading bellow you can also find quant option recommendations for some common GPU VRAM capacities.\n\n!image/png\n\n*A poetic and soulful sentience simulation.*\n\n<br>", "# General recommendations for quant options:\n\n<details><summary>\n⇲ Click here to expand/hide general common recommendations.\n</summary>\n \n*Assuming a context size of 8192 for simplicity and 1GB of Operating System VRAM overhead with some safety margin to avoid overflowing buffers...* <br> <br>\nFor 11-12GB VRAM: <br> A GPU with 11-12GB of VRAM capacity can comfortably use the Q6_K-imat quant option and run it at good speeds. <br> This is the same with or without using #vision capabilities. <br> <br>\nFor 8GB VRAM: <br> If not using #vision, for GPUs with 8GB of VRAM capacity the Q5_K_M-imat quant option will fit comfortably and should run at good speeds. <br> If you are also using #vision from this model opt for the Q4_K_M-imat quant option to avoid filling the buffers and potential slowdowns. <br><br>\nFor 6GB VRAM: <br> If not using #vision, for GPUs with 6GB of VRAM capacity the IQ3_M-imat quant option should fit comfortably to run at good speeds. <br> If you are also using #vision from this model opt for the IQ3_XXS-imat quant option. <br><br>\n \n</details><br>", "# Quantization process information:\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n\n\n\nSteps performed:\n\n\nThe latest of URL available at the time was used, with URL as calibration data.\n \n</details><br>", "# About \"Imatrix\":\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n \nIt stands for Importance Matrix, a technique used to improve the quality of quantized models.\nThe Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.\nThe idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.\n[[1]](URL [[2]](URL\n\n> [!NOTE]\n> For imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here for reference. 
This was just to add a bit more diversity to the data with the intended use case in mind.\n \n</details><br>", "# Vision/multimodal capabilities:\n\n<details><summary>\n⇲ Click here to expand/hide how this would work in practice in a roleplay chat.\n</summary>\n\n!image/png\n\n</details><br>\n\n<details><summary>\n⇲ Click here to expand/hide how your SillyTavern Image Captions extension settings should look.\n</summary>\n\n!image/png\n\n</details><br>", "# Required for vision functionality:\n\n> [!WARNING]\n> To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file, you can get it here or as uploaded in the mmproj folder in the repository.\n\n1: Make sure you are using the latest version of KoboldCpp.\n\n2: Load the mmproj file by using the corresponding section in the interface:\n\n!image/png\n\n2.1: For CLI users, you can load the mmproj file by adding the respective flag to your usual command:" ]
[ "TAGS\n#gguf #quantized #roleplay #multimodal #vision #llava #sillytavern #merge #mistral #conversational #license-other #region-us \n", "# #Roleplay #Multimodal #Vision #Based #Unhinged #Unaligned #Uncensored\n\n\n\n\n\nIn this repository you can find GGUF-IQ-Imatrix quants for ResplendentAI/Aura_v2_7B which recommends using the ChatML prompt format. <br> Your feedback about this page or the model for the authors is appreciated in the Community tab.\n\n\n\n\n> [!TIP]\n> Vision: <br>\n> This is a #multimodal model that also has optional #vision capabilities. <br> Expand the relevant sections bellow and read the full card information if you also want to make use that functionality.\n>\n> Quant options: <br>\n> Reading bellow you can also find quant option recommendations for some common GPU VRAM capacities.\n\n!image/png\n\n*A poetic and soulful sentience simulation.*\n\n<br>", "# General recommendations for quant options:\n\n<details><summary>\n⇲ Click here to expand/hide general common recommendations.\n</summary>\n \n*Assuming a context size of 8192 for simplicity and 1GB of Operating System VRAM overhead with some safety margin to avoid overflowing buffers...* <br> <br>\nFor 11-12GB VRAM: <br> A GPU with 11-12GB of VRAM capacity can comfortably use the Q6_K-imat quant option and run it at good speeds. <br> This is the same with or without using #vision capabilities. <br> <br>\nFor 8GB VRAM: <br> If not using #vision, for GPUs with 8GB of VRAM capacity the Q5_K_M-imat quant option will fit comfortably and should run at good speeds. <br> If you are also using #vision from this model opt for the Q4_K_M-imat quant option to avoid filling the buffers and potential slowdowns. <br><br>\nFor 6GB VRAM: <br> If not using #vision, for GPUs with 6GB of VRAM capacity the IQ3_M-imat quant option should fit comfortably to run at good speeds. <br> If you are also using #vision from this model opt for the IQ3_XXS-imat quant option. <br><br>\n \n</details><br>", "# Quantization process information:\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n\n\n\nSteps performed:\n\n\nThe latest of URL available at the time was used, with URL as calibration data.\n \n</details><br>", "# About \"Imatrix\":\n\n<details><summary>\n⇲ Click here to expand/hide more information about this topic.\n</summary>\n \nIt stands for Importance Matrix, a technique used to improve the quality of quantized models.\nThe Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process.\nThe idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse.\n[[1]](URL [[2]](URL\n\n> [!NOTE]\n> For imatrix data generation, kalomaze's 'groups_merged.txt' with additional roleplay chats was used, you can find it here for reference. 
This was just to add a bit more diversity to the data with the intended use case in mind.\n \n</details><br>", "# Vision/multimodal capabilities:\n\n<details><summary>\n⇲ Click here to expand/hide how this would work in practice in a roleplay chat.\n</summary>\n\n!image/png\n\n</details><br>\n\n<details><summary>\n⇲ Click here to expand/hide how your SillyTavern Image Captions extension settings should look.\n</summary>\n\n!image/png\n\n</details><br>", "# Required for vision functionality:\n\n> [!WARNING]\n> To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file, you can get it here or as uploaded in the mmproj folder in the repository.\n\n1: Make sure you are using the latest version of KoboldCpp.\n\n2: Load the mmproj file by using the corresponding section in the interface:\n\n!image/png\n\n2.1: For CLI users, you can load the mmproj file by adding the respective flag to your usual command:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral-finetuned-samsum This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.1-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GPTQ) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
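The card above includes no usage snippet. As a rough sketch (not the authors' documented method), the adapter could be loaded with PEFT's `AutoPeftModelForCausalLM`, which resolves the GPTQ base model from the adapter config; the prompt below is illustrative only, and `auto-gptq`/`optimum` must be installed for the GPTQ base to load.

```python
# A minimal inference sketch, assuming auto-gptq, optimum, peft, and
# transformers are installed; the prompt is illustrative, not from training.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "qytzer/mistral-finetuned-samsum"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Summarize the following dialogue:\nAmanda: I baked cookies. Do you want some?\nJerry: Sure!"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```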
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "model-index": [{"name": "mistral-finetuned-samsum", "results": []}]}
qytzer/mistral-finetuned-samsum
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.1-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-04-16T11:06:25+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us
# mistral-finetuned-samsum This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistral-finetuned-samsum\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.1-GPTQ #license-apache-2.0 #region-us \n", "# mistral-finetuned-samsum\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.1-GPTQ on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Description [MaziyarPanahi/WizardLM-2-8x22B-AWQ](https://huggingface.co/MaziyarPanahi/WizardLM-2-8x22B-AWQ) is a quantized (AWQ) version of [microsoft/WizardLM-2-8x22B](https://huggingface.co/microsoft/WizardLM-2-8x22B). ## How to use ### Install the necessary packages ``` pip install --upgrade accelerate autoawq transformers ``` ### Example Python code ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_id = "MaziyarPanahi/WizardLM-2-8x22B-AWQ" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id).to(0) text = "User:\nHello can you provide me with top-3 cool places to visit in Paris?\n\nAssistant:\n" inputs = tokenizer(text, return_tensors="pt").to(0) out = model.generate(**inputs, max_new_tokens=300) print(tokenizer.decode(out[0], skip_special_tokens=True)) ``` Results: ``` User: Hello can you provide me with top-3 cool places to visit in Paris? Assistant: Absolutely, here are my top-3 recommendations for must-see places in Paris: 1. The Eiffel Tower: An icon of Paris, this wrought-iron lattice tower is a global cultural icon of France and is among the most recognizable structures in the world. Climbing up to the top offers breathtaking views of the city. 2. The Louvre Museum: Home to thousands of works of art, the Louvre is the world's largest art museum and a historic monument in Paris. Must-see pieces include the Mona Lisa, the Winged Victory of Samothrace, and the Venus de Milo. 3. Notre-Dame Cathedral: This cathedral is a masterpiece of French Gothic architecture and is famous for its intricate stone carvings, beautiful stained glass, and its iconic twin towers. Be sure to spend some time exploring its history and learning about the fascinating restoration efforts post the 2019 fire. I hope you find these recommendations helpful and that they make for an enjoyable and memorable trip to Paris. Safe travels! ```
{"tags": ["finetuned", "quantized", "4-bit", "AWQ", "text-generation", "mixtral"], "model_name": "WizardLM-2-8x22B-AWQ", "base_model": "microsoft/WizardLM-2-8x22B", "inference": false, "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/WizardLM-2-8x22B-AWQ
null
[ "transformers", "safetensors", "mixtral", "text-generation", "finetuned", "quantized", "4-bit", "AWQ", "base_model:microsoft/WizardLM-2-8x22B", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T11:06:56+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #finetuned #quantized #4-bit #AWQ #base_model-microsoft/WizardLM-2-8x22B #autotrain_compatible #text-generation-inference #region-us
# Description MaziyarPanahi/home-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-8x22B ## How to use ### Install the necessary packages ### Example Python code Results:
[ "# Description\nMaziyarPanahi/home-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-8x22B", "## How to use", "### Install the necessary packages", "### Example Python code\n\n\n\n\nResults:" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #finetuned #quantized #4-bit #AWQ #base_model-microsoft/WizardLM-2-8x22B #autotrain_compatible #text-generation-inference #region-us \n", "# Description\nMaziyarPanahi/home-AWQ is a quantized (AWQ) version of microsoft/WizardLM-2-8x22B", "## How to use", "### Install the necessary packages", "### Example Python code\n\n\n\n\nResults:" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0.
Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T, which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
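For reference, the shipped compression settings can be inspected without loading the model; a small sketch follows (the file name is taken from the card and may sit under a different path in the repository):

```python
# Hedged sketch: downloads and prints smash_config.json from the repo;
# adjust the filename/path if the file lives elsewhere in the repository.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-GPTQ-8bit-smashed",
    filename="smash_config.json",
)
with open(path) as f:
    print(json.dumps(json.load(f), indent=2))
```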
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-1195k-token-2.5T-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-16T11:08:53+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-2_summarize This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.7266 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 3 - training_steps: 400 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9986 | 0.03 | 25 | 1.8320 | | 1.8726 | 0.06 | 50 | 1.7854 | | 1.7545 | 0.08 | 75 | 1.7727 | | 1.8384 | 0.11 | 100 | 1.7614 | | 1.7279 | 0.14 | 125 | 1.7561 | | 1.7348 | 0.17 | 150 | 1.7503 | | 1.8037 | 0.2 | 175 | 1.7442 | | 1.7602 | 0.23 | 200 | 1.7418 | | 1.8112 | 0.25 | 225 | 1.7372 | | 1.7011 | 0.28 | 250 | 1.7339 | | 1.6925 | 0.31 | 275 | 1.7312 | | 1.6696 | 0.34 | 300 | 1.7295 | | 1.7461 | 0.37 | 325 | 1.7275 | | 1.6521 | 0.4 | 350 | 1.7270 | | 1.6636 | 0.42 | 375 | 1.7267 | | 1.7048 | 0.45 | 400 | 1.7266 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
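No usage snippet is given above; a minimal inference sketch, assuming the adapter is applied on top of the base `microsoft/phi-2` with PEFT (the prompt is an illustrative placeholder):

```python
# Minimal sketch: load phi-2 and apply the fine-tuned LoRA adapter.
# Assumes peft and transformers (at least the versions listed above) are installed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "deboramachadoandrade/phi-2_summarize")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

prompt = "Summarize the following article:\n..."  # illustrative placeholder
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```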
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2_summarize", "results": []}]}
deboramachadoandrade/phi-2_summarize
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-16T11:11:04+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us
phi-2\_summarize ================ This model is a fine-tuned version of microsoft/phi-2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.7266 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 3 * training\_steps: 400 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 3\n* training\\_steps: 400", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 3\n* training\\_steps: 400", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
yunus-emre/ft-Llama2-with-stack-exchange-paired
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:12:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft-Llama2-with-stack-exchange-paired This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the umarigan/falcon_feedback_instraction_Turkish dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
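For readers reproducing the setup, the hyperparameters above map directly onto `transformers.TrainingArguments` as consumed by TRL's `SFTTrainer`; the sketch below is an assumption about how they fit together, not the authors' actual training script:

```python
# Hedged sketch: TrainingArguments mirroring the hyperparameters listed above.
# output_dir is a placeholder; model/dataset wiring for SFTTrainer is omitted.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ft-Llama2-with-stack-exchange-paired",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    lr_scheduler_type="cosine",
    warmup_steps=50,
    max_steps=100,
    fp16=True,  # "Native AMP" mixed precision
    seed=42,
)
```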
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "ft-Llama2-with-stack-exchange-paired", "results": []}]}
yunus-emre/sft
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-hf", "region:us" ]
null
2024-04-16T11:12:57+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us
# ft-Llama2-with-stack-exchange-paired This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the umarigan/falcon_feedback_instraction_Turkish dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 50 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# ft-Llama2-with-stack-exchange-paired\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the umarigan/falcon_feedback_instraction_Turkish dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 50\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-hf #region-us \n", "# ft-Llama2-with-stack-exchange-paired\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the umarigan/falcon_feedback_instraction_Turkish dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 1\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 50\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vedant9034/H-model
null
[ "transformers", "safetensors", "marian", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:13:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #marian #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #marian #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zizoNa/zibib
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:13:35+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_shp3_dpo9 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.1377 - Rewards/chosen: -9.3962 - Rewards/rejected: -11.2598 - Rewards/accuracies: 0.5800 - Rewards/margins: 1.8636 - Logps/rejected: -245.8684 - Logps/chosen: -228.6076 - Logits/rejected: -0.5115 - Logits/chosen: -0.4911 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0303 | 2.67 | 100 | 2.0259 | -5.7696 | -7.0838 | 0.5900 | 1.3141 | -241.2283 | -224.5780 | -0.4920 | -0.4631 | | 0.0777 | 5.33 | 200 | 2.9429 | 4.7991 | 4.3667 | 0.5400 | 0.4324 | -228.5056 | -212.8350 | -0.5965 | -0.6045 | | 0.0403 | 8.0 | 300 | 3.7747 | -6.4630 | -6.9732 | 0.5200 | 0.5101 | -241.1055 | -225.3485 | -0.4940 | -0.4878 | | 0.0 | 10.67 | 400 | 4.2141 | -9.5948 | -11.4745 | 0.5900 | 1.8797 | -246.1069 | -228.8282 | -0.5120 | -0.4918 | | 0.0 | 13.33 | 500 | 4.1730 | -9.3741 | -11.2115 | 0.5900 | 1.8374 | -245.8147 | -228.5830 | -0.5108 | -0.4906 | | 0.0 | 16.0 | 600 | 4.1578 | -9.3777 | -11.2055 | 0.5800 | 1.8278 | -245.8080 | -228.5870 | -0.5104 | -0.4904 | | 0.0 | 18.67 | 700 | 4.1186 | -9.3460 | -11.2703 | 0.5900 | 1.9244 | -245.8801 | -228.5517 | -0.5109 | -0.4907 | | 0.0 | 21.33 | 800 | 4.1078 | -9.3558 | -11.2366 | 0.5800 | 1.8808 | -245.8426 | -228.5626 | -0.5111 | -0.4908 | | 0.0 | 24.0 | 900 | 4.1290 | -9.3834 | -11.2428 | 0.5800 | 1.8594 | -245.8495 | -228.5933 | -0.5115 | -0.4912 | | 0.0 | 26.67 | 1000 | 4.1377 | -9.3962 | -11.2598 | 0.5800 | 1.8636 | -245.8684 | -228.6076 | -0.5115 | -0.4911 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp3_dpo9", "results": []}]}
guoyu-zhang/model_hh_shp3_dpo9
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-16T11:14:23+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
model\_hh\_shp3\_dpo9 ===================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 4.1377 * Rewards/chosen: -9.3962 * Rewards/rejected: -11.2598 * Rewards/accuracies: 0.5800 * Rewards/margins: 1.8636 * Logps/rejected: -245.8684 * Logps/chosen: -228.6076 * Logits/rejected: -0.5115 * Logits/chosen: -0.4911 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.1 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_prom_prom_core_tata-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_core_tata](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_core_tata) dataset. It achieves the following results on the evaluation set: - Loss: 2.0198 - F1 Score: 0.7080 - Accuracy: 0.7080 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1536 - eval_batch_size: 1536 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.5635 | 50.0 | 200 | 0.6549 | 0.7031 | 0.7031 | | 0.2871 | 100.0 | 400 | 0.8640 | 0.7126 | 0.7129 | | 0.1702 | 150.0 | 600 | 1.0519 | 0.6874 | 0.6900 | | 0.1165 | 200.0 | 800 | 1.2393 | 0.6998 | 0.6998 | | 0.0916 | 250.0 | 1000 | 1.3339 | 0.7034 | 0.7047 | | 0.0784 | 300.0 | 1200 | 1.3117 | 0.7077 | 0.7080 | | 0.0648 | 350.0 | 1400 | 1.4027 | 0.7011 | 0.7015 | | 0.0566 | 400.0 | 1600 | 1.3880 | 0.7080 | 0.7080 | | 0.05 | 450.0 | 1800 | 1.5368 | 0.7063 | 0.7064 | | 0.0458 | 500.0 | 2000 | 1.6096 | 0.7178 | 0.7178 | | 0.0406 | 550.0 | 2200 | 1.5778 | 0.7110 | 0.7113 | | 0.0354 | 600.0 | 2400 | 1.6177 | 0.6965 | 0.6966 | | 0.0353 | 650.0 | 2600 | 1.7210 | 0.7047 | 0.7047 | | 0.0326 | 700.0 | 2800 | 1.7257 | 0.7078 | 0.7080 | | 0.0307 | 750.0 | 3000 | 1.9139 | 0.7092 | 0.7096 | | 0.0286 | 800.0 | 3200 | 1.8367 | 0.7177 | 0.7178 | | 0.0266 | 850.0 | 3400 | 1.7451 | 0.7096 | 0.7096 | | 0.0249 | 900.0 | 3600 | 1.6792 | 0.7145 | 0.7145 | | 0.0241 | 950.0 | 3800 | 1.9410 | 0.7256 | 0.7259 | | 0.0232 | 1000.0 | 4000 | 1.8502 | 0.7102 | 0.7113 | | 0.0216 | 1050.0 | 4200 | 1.9516 | 0.7111 | 0.7113 | | 0.0209 | 1100.0 | 4400 | 1.8688 | 0.7161 | 0.7162 | | 0.0209 | 1150.0 | 4600 | 1.8619 | 0.6982 | 0.6982 | | 0.0206 | 1200.0 | 4800 | 1.8354 | 0.7129 | 0.7129 | | 0.0192 | 1250.0 | 5000 | 2.0636 | 0.7209 | 0.7210 | | 0.0174 | 1300.0 | 5200 | 1.9511 | 0.7160 | 0.7162 | | 0.019 | 1350.0 | 5400 | 1.9596 | 0.7145 | 0.7145 | | 0.0182 | 1400.0 | 5600 | 1.8634 | 0.7142 | 0.7145 | | 0.0178 | 1450.0 | 5800 | 1.8704 | 0.7193 | 0.7194 | | 0.0171 | 1500.0 | 6000 | 1.8839 | 0.7111 | 0.7113 | | 0.0157 | 1550.0 | 6200 | 2.0428 | 0.7176 | 0.7178 | | 0.0165 | 1600.0 | 6400 | 2.0643 | 0.7124 | 0.7129 | | 0.0158 | 1650.0 | 6600 | 1.9762 | 0.7174 | 0.7178 | | 0.0154 | 1700.0 | 6800 | 1.9272 | 0.7096 | 0.7096 | | 0.0145 | 1750.0 | 7000 | 1.9679 | 0.7161 | 0.7162 | | 0.0149 | 1800.0 | 7200 | 2.0367 | 0.7095 | 0.7096 | | 0.0148 | 1850.0 | 7400 | 1.8743 | 0.7145 | 0.7145 | | 0.0155 | 1900.0 | 7600 | 2.0207 | 0.7057 | 0.7064 | | 0.0143 | 1950.0 | 7800 | 2.0245 | 0.7152 | 0.7162 | | 0.0142 | 2000.0 | 8000 | 1.8480 | 0.7156 | 0.7162 | | 0.0132 | 2050.0 | 8200 | 1.8807 | 0.7237 | 0.7243 | | 0.0134 | 2100.0 | 8400 | 1.8346 | 0.7176 | 0.7178 | | 0.0121 | 2150.0 | 8600 | 2.0868 | 0.7225 | 
0.7227 | | 0.0109 | 2200.0 | 8800 | 2.1062 | 0.7127 | 0.7129 | | 0.0118 | 2250.0 | 9000 | 2.0071 | 0.7144 | 0.7145 | | 0.0122 | 2300.0 | 9200 | 1.9888 | 0.7127 | 0.7129 | | 0.0117 | 2350.0 | 9400 | 2.0530 | 0.7161 | 0.7162 | | 0.0114 | 2400.0 | 9600 | 2.0494 | 0.7160 | 0.7162 | | 0.0121 | 2450.0 | 9800 | 2.0138 | 0.7177 | 0.7178 | | 0.0108 | 2500.0 | 10000 | 2.0286 | 0.7177 | 0.7178 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_core_tata-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_core_tata-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T11:14:39+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_core\_tata-seqsight\_16384\_512\_34M-L32\_all ============================================================== This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_core\_tata dataset. It achieves the following results on the evaluation set: * Loss: 2.0198 * F1 Score: 0.7080 * Accuracy: 0.7080 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 1536 * eval\_batch\_size: 1536 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 1536\n* eval\\_batch\\_size: 1536\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ft_wav2vec2_base_six_500 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2609 - Wer: 45.1739 - Cer: 21.0426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 4.3484 | 11.36 | 500 | 2.9679 | 98.7617 | 98.9968 | | 2.4169 | 22.73 | 1000 | 1.2648 | 64.7328 | 28.2059 | | 0.6152 | 34.09 | 1500 | 1.1641 | 47.3113 | 21.7501 | | 0.2786 | 45.45 | 2000 | 1.2609 | 45.1739 | 21.0426 | ### Framework versions - Transformers 4.39.3 - Pytorch 1.12.1+cu116 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "ft_wav2vec2_base_six_500", "results": []}]}
ImanNalia/ft_wav2vec2_base_six_500
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:14:51+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
ft\_wav2vec2\_base\_six\_500 ============================ This model is a fine-tuned version of facebook/wav2vec2-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.2609 * Wer: 45.1739 * Cer: 21.0426 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2000 * num\_epochs: 50 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 1.12.1+cu116 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu116\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2000\n* num\\_epochs: 50", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.12.1+cu116\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zizoNa/nami
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:17:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistralv1_spectral_r8_4e5_e3 This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistralv1_spectral_r8_4e5_e3", "results": []}]}
fangzhaoz/mistralv1_spectral_r8_4e5_e3
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-16T11:17:08+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
# mistralv1_spectral_r8_4e5_e3 This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results ### Framework versions - PEFT 0.9.0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mistralv1_spectral_r8_4e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "# mistralv1_spectral_r8_4e5_e3\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 0\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3", "### Training results", "### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
{"library_name": "peft"}
adithyaMadhu/fine-tune-sop-sb
null
[ "peft", "region:us" ]
null
2024-04-16T11:19:40+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Reza-Barati/distilbert-base-uncased-finetuned-for-Extracting-IoCs-cybersecurity This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0478 - Validation Loss: 0.0753 - Train Precision: 0.9023 - Train Recall: 0.9443 - Train F1: 0.9228 - Train Accuracy: 0.9774 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 33345, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1491 | 0.1030 | 0.8617 | 0.9194 | 0.8896 | 0.9671 | 0 | | 0.0899 | 0.0882 | 0.8839 | 0.9234 | 0.9032 | 0.9720 | 1 | | 0.0699 | 0.0791 | 0.8955 | 0.9414 | 0.9179 | 0.9756 | 2 | | 0.0572 | 0.0749 | 0.8950 | 0.9438 | 0.9188 | 0.9762 | 3 | | 0.0478 | 0.0753 | 0.9023 | 0.9443 | 0.9228 | 0.9774 | 4 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "Reza-Barati/distilbert-base-uncased-finetuned-for-Extracting-IoCs-cybersecurity", "results": []}]}
Reza-Barati/distilbert-base-uncased-finetuned-for-Extracting-IoCs-cybersecurity
null
[ "transformers", "tf", "tensorboard", "distilbert", "token-classification", "generated_from_keras_callback", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:19:41+00:00
[]
[]
TAGS #transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Reza-Barati/distilbert-base-uncased-finetuned-for-Extracting-IoCs-cybersecurity =============================================================================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.0478 * Validation Loss: 0.0753 * Train Precision: 0.9023 * Train Recall: 0.9443 * Train F1: 0.9228 * Train Accuracy: 0.9774 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 5e-05, 'decay\_steps': 33345, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 33345, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #tensorboard #distilbert #token-classification #generated_from_keras_callback #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 5e-05, 'decay\\_steps': 33345, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 ## Model Description jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found [**here**](https://huggingface.co/datasets/florianhoenicke/jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564). ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from transformers import AutoModel, AutoTokenizer llm_name = "florianhoenicke/jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564" tokenizer = AutoTokenizer.from_pretrained(llm_name) model = AutoModel.from_pretrained(llm_name) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens) ```
{}
florianhoenicke/jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564
null
[ "region:us" ]
null
2024-04-16T11:21:15+00:00
[]
[]
TAGS #region-us
# jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 ## Model Description jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain. ## Use Case This model is designed to support various applications in natural language processing and understanding. ## Associated Dataset The dataset for this model can be found here. ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
[ "# jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564", "## Model Description\n\njina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.", "## Use Case\nThis model is designed to support various applications in natural language processing and understanding.", "## Associated Dataset\n\nThis the dataset for this model can be found here.", "## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:" ]
[ "TAGS\n#region-us \n", "# jina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564", "## Model Description\n\njina-website-1-0-16-BAAI_bge-small-en-v1.5-50_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.", "## Use Case\nThis model is designed to support various applications in natural language processing and understanding.", "## Associated Dataset\n\nThis the dataset for this model can be found here.", "## How to Use\n\nThis model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# GUE_prom_prom_300_all-seqsight_16384_512_34M-L32_all

This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_prom_prom_300_all](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_prom_prom_300_all) dataset. It achieves the following results on the evaluation set:
- Loss: 0.6210
- F1 Score: 0.8028
- Accuracy: 0.8032

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2048
- eval_batch_size: 2048
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 10000

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:|
| 0.5739 | 8.33 | 200 | 0.5152 | 0.7465 | 0.7478 |
| 0.484 | 16.67 | 400 | 0.4931 | 0.7681 | 0.7682 |
| 0.448 | 25.0 | 600 | 0.4955 | 0.7704 | 0.7704 |
| 0.4092 | 33.33 | 800 | 0.4630 | 0.7868 | 0.7868 |
| 0.3708 | 41.67 | 1000 | 0.4845 | 0.7897 | 0.7904 |
| 0.3422 | 50.0 | 1200 | 0.4727 | 0.7956 | 0.7966 |
| 0.3165 | 58.33 | 1400 | 0.4508 | 0.8074 | 0.8074 |
| 0.297 | 66.67 | 1600 | 0.4897 | 0.7995 | 0.8002 |
| 0.2807 | 75.0 | 1800 | 0.4949 | 0.8048 | 0.8051 |
| 0.2716 | 83.33 | 2000 | 0.4795 | 0.8095 | 0.8098 |
| 0.2588 | 91.67 | 2200 | 0.4944 | 0.8072 | 0.8076 |
| 0.2482 | 100.0 | 2400 | 0.4825 | 0.8090 | 0.8095 |
| 0.2397 | 108.33 | 2600 | 0.5244 | 0.8073 | 0.8078 |
| 0.2314 | 116.67 | 2800 | 0.5132 | 0.8099 | 0.8101 |
| 0.2249 | 125.0 | 3000 | 0.5320 | 0.8055 | 0.8063 |
| 0.2191 | 133.33 | 3200 | 0.5483 | 0.8055 | 0.8066 |
| 0.2156 | 141.67 | 3400 | 0.5296 | 0.8144 | 0.8149 |
| 0.2056 | 150.0 | 3600 | 0.5429 | 0.8080 | 0.8086 |
| 0.2016 | 158.33 | 3800 | 0.5506 | 0.8132 | 0.8135 |
| 0.1965 | 166.67 | 4000 | 0.5529 | 0.8122 | 0.8125 |
| 0.1933 | 175.0 | 4200 | 0.5560 | 0.8135 | 0.8139 |
| 0.1894 | 183.33 | 4400 | 0.5779 | 0.8132 | 0.8137 |
| 0.1845 | 191.67 | 4600 | 0.5639 | 0.8132 | 0.8135 |
| 0.181 | 200.0 | 4800 | 0.5868 | 0.8143 | 0.8147 |
| 0.1781 | 208.33 | 5000 | 0.6098 | 0.8140 | 0.8144 |
| 0.1774 | 216.67 | 5200 | 0.5936 | 0.8134 | 0.8139 |
| 0.1747 | 225.0 | 5400 | 0.5638 | 0.8170 | 0.8172 |
| 0.1727 | 233.33 | 5600 | 0.6162 | 0.8079 | 0.8086 |
| 0.1683 | 241.67 | 5800 | 0.6132 | 0.8136 | 0.8139 |
| 0.1668 | 250.0 | 6000 | 0.6213 | 0.8100 | 0.8106 |
| 0.1645 | 258.33 | 6200 | 0.5785 | 0.8185 | 0.8187 |
| 0.1634 | 266.67 | 6400 | 0.6438 | 0.8059 | 0.8066 |
| 0.1622 | 275.0 | 6600 | 0.6073 | 0.8145 | 0.8149 |
| 0.1586 | 283.33 | 6800 | 0.6211 | 0.8146 | 0.8150 |
| 0.1568 | 291.67 | 7000 | 0.5975 | 0.8176 | 0.8179 |
| 0.1563 | 300.0 | 7200 | 0.6308 | 0.8114 | 0.8122 |
| 0.1547 | 308.33 | 7400 | 0.6082 | 0.8170 | 0.8174 |
| 0.1529 | 316.67 | 7600 | 0.6459 | 0.8112 | 0.8118 |
| 0.1529 | 325.0 | 7800 | 0.6191 | 0.8173 | 0.8176 |
| 0.1505 | 333.33 | 8000 | 0.6348 | 0.8131 | 0.8135 |
| 0.1497 | 341.67 | 8200 | 0.6296 | 0.8161 | 0.8164 |
| 0.1502 | 350.0 | 8400 | 0.6333 | 0.8123 | 0.8127 |
| 0.1469 | 358.33 | 8600 | 0.6308 | 0.8174 | 0.8177 |
| 0.1453 | 366.67 | 8800 | 0.6468 | 0.8127 | 0.8132 |
| 0.1453 | 375.0 | 9000 | 0.6389 | 0.8131 | 0.8135 |
| 0.1453 | 383.33 | 9200 | 0.6359 | 0.8121 | 0.8125 |
| 0.1443 | 391.67 | 9400 | 0.6442 | 0.8123 | 0.8128 |
| 0.1442 | 400.0 | 9600 | 0.6459 | 0.8139 | 0.8144 |
| 0.1425 | 408.33 | 9800 | 0.6472 | 0.8138 | 0.8142 |
| 0.1436 | 416.67 | 10000 | 0.6445 | 0.8131 | 0.8135 |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
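The trainer-generated card leaves usage undocumented, so here is a minimal, hedged loading sketch. It assumes the adapter pairs with the base model through PEFT's standard API and that the task is binary sequence classification (promoter vs. non-promoter); the head type, `num_labels`, and the toy DNA input are assumptions, not taken from the card.

```python
# Hedged sketch: load the LoRA adapter on top of the seqsight base model.
# Assumption: a standard two-label sequence-classification head; the base
# model may additionally require trust_remote_code=True if its architecture
# is custom.
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_34M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=2)
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # toy DNA sequence
logits = model(**inputs).logits
print(logits.softmax(-1))
```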
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_prom_prom_300_all-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_prom_prom_300_all-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T11:21:57+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_prom\_prom\_300\_all-seqsight\_16384\_512\_34M-L32\_all ============================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_prom\_prom\_300\_all dataset. It achieves the following results on the evaluation set: * Loss: 0.6210 * F1 Score: 0.8028 * Accuracy: 0.8032 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
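The "How to Get Started" section above is a placeholder, so the following is only the stock transformers pattern for a text-generation checkpoint; nothing here comes from the authors, and the prompt and generation settings are illustrative.

```python
# Hedged sketch: standard causal-LM loading for this Mistral-based checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fangzhaoz/mistralv1_spectral_r8_4e5_e3_merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The key idea of low-rank fine-tuning is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```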
{"library_name": "transformers", "tags": []}
fangzhaoz/mistralv1_spectral_r8_4e5_e3_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T11:21:58+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
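The card itself is an empty template; since the repo is tagged "conversational", a chat-template call is the most plausible entry point. The message content is illustrative, and the tokenizer's actual chat template (if it ships one at all) may differ.

```python
# Hedged sketch: chat-style generation with a StableLM checkpoint.
# Assumption: the tokenizer provides a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyllm234/sc43"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "What is a model card?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```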
{"library_name": "transformers", "tags": []}
heyllm234/sc43
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:22:03+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
token-classification
transformers
## Model Specification - Model: XLM-RoBERTa (base-sized model) - Training Data: - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, & Czech corpora (Top 7 Languages) - Training Details: - Base configurations with a minor adjustment in learning rate (4.5e-5) ## Evaluation - Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set) - Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 75.18% accuracy) ## POS Tags - ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
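Since the card reports zero-shot POS tagging on Tagalog, a token-classification pipeline is the natural way to try it. This is a hedged sketch: the aggregation strategy and the sample sentence are assumptions, not part of the card.

```python
# Hedged sketch: zero-shot Tagalog POS tagging with the fine-tuned XLM-R model.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="iceman2434/xlm-roberta-base-ft-udpos213-top7lang",
    aggregation_strategy="simple",  # assumption: merge word pieces per word
)
for tok in tagger("Nagbabasa ang bata ng libro."):  # "The child is reading a book."
    print(tok["word"], tok["entity_group"])
```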
{"language": ["tl"], "datasets": ["universal_dependencies"], "metrics": ["f1"], "pipeline_tag": "token-classification"}
iceman2434/xlm-roberta-base-ft-udpos213-top7lang
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "tl", "dataset:universal_dependencies", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:22:28+00:00
[]
[ "tl" ]
TAGS #transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us
## Model Specification - Model: XLM-RoBERTa (base-sized model) - Training Data: - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, & Czech corpora (Top 7 Languages) - Training Details: - Base configurations with a minor adjustment in learning rate (4.5e-5) ## Evaluation - Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set) - Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 75.18\% Accuracy) ## POS Tags - ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
[ "## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, & Czech corpora (Top 7 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)", "## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 75.18\\% Accuracy)", "## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us \n", "## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, Urdu, & Czech corpora (Top 7 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)", "## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 75.18\\% Accuracy)", "## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB" ]
token-classification
transformers
## Model Specification - Model: XLM-RoBERTa (base-sized model) - Training Data: - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, & Urdu corpora (Top 6 Languages) - Training Details: - Base configurations with a minor adjustment in learning rate (4.5e-5) ## Evaluation - Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set) - Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 77.67% accuracy) ## POS Tags - ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
{"language": ["tl"], "datasets": ["universal_dependencies"], "metrics": ["f1"], "pipeline_tag": "token-classification"}
iceman2434/xlm-roberta-base-ft-udpos213-top6lang
null
[ "transformers", "pytorch", "xlm-roberta", "token-classification", "tl", "dataset:universal_dependencies", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:25:27+00:00
[]
[ "tl" ]
TAGS #transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us
## Model Specification - Model: XLM-RoBERTa (base-sized model) - Training Data: - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, & Urdu corpora (Top 6 Languages) - Training Details: - Base configurations with a minor adjustment in learning rate (4.5e-5) ## Evaluation - Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set) - Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 77.67\% Accuracy) ## POS Tags - ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB
[ "## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, & Urdu corpora (Top 6 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)", "## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 77.67\\% Accuracy)", "## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB" ]
[ "TAGS\n#transformers #pytorch #xlm-roberta #token-classification #tl #dataset-universal_dependencies #autotrain_compatible #endpoints_compatible #region-us \n", "## Model Specification\n- Model: XLM-RoBERTa (base-sized model)\n- Training Data:\n - Combined Afrikaans, Hebrew, Bulgarian, Vietnamese, Norwegian, & Urdu corpora (Top 6 Languages)\n- Training Details:\n - Base configurations with a minor adjustment in learning rate (4.5e-5)", "## Evaluation\n- Evaluation Dataset: Universal Dependencies Tagalog Ugnayan (Testing Set)\n- Tested in a zero-shot cross-lingual scenario on a Universal Dependencies Tagalog Ugnayan testing dataset (with 77.67\\% Accuracy)", "## POS Tags\n- ADJ – ADP – ADV – CCONJ – DET – INTJ – NOUN – NUM – PART – PRON – PROPN – PUNCT – SCONJ – VERB" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
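The card is an empty template; judging only from the repo name and the m2m_100 architecture tag, this appears to be a Bangla transliteration model, so the seq2seq sketch below is a guess at the intended usage. The romanized example input and the absence of any forced language tokens are assumptions.

```python
# Hedged sketch: generic seq2seq inference; actual pre/post-processing
# (e.g., forced language-code tokens) may be required and is not documented.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "SKNahin/nllb-bn-transliterator"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("ami tomake bhalobashi", return_tensors="pt")  # romanized Bangla (assumed)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```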
{"library_name": "transformers", "tags": []}
SKNahin/nllb-bn-transliterator
null
[ "transformers", "safetensors", "m2m_100", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:25:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #m2m_100 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
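The card is an empty template; the repo name points at a CodeGen-350M-mono fine-tune for Python, so the sketch below uses the plain causal-LM completion pattern. The prompt is illustrative only and not taken from the card.

```python
# Hedged sketch: code completion with a CodeGen-style checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Ni3-7pute/codegen-350M-mono-python-18k-claasp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def xor_bytes(a: bytes, b: bytes) -> bytes:\n    "
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```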
{"library_name": "transformers", "tags": []}
Ni3-7pute/codegen-350M-mono-python-18k-claasp
null
[ "transformers", "safetensors", "codegen", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-16T11:27:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #codegen #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #codegen #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
A LoRA that generates images in the ambient-occlusion style. SD version: SD 1.5. You can use ControlNet models such as Canny, Lineart, or SoftEdge to create depth from line art alone. Add 'ao, 3d' to the prompt. The number after the LoRA filename refers to the number of merged LoRAs. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/mOIIcodjXega8JCwkiI6C.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/GGAMITm6vx0me3AZz0tP0.png) <video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/62f2b20aeb9e8a5f05cf9a9d/DQer16rHvavUPSxGE2ms8.mp4"></video>
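A minimal diffusers sketch of the workflow the card describes (SD 1.5 plus a lineart ControlNet plus this LoRA, with 'ao, 3d' in the prompt). The ControlNet checkpoint, the base-model repo, and the LoRA weight selection are assumptions; pass `weight_name=...` to pick a specific .safetensors file if the repo ships several merges.

```python
# Hedged sketch: SD 1.5 + lineart ControlNet + the Line2ao LoRA.
# Assumptions: lllyasviel's v1.1 lineart ControlNet and the runwayml SD 1.5 base.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("toyxyz/Line2ao_sd1.5")  # weight_name=... selects one merge

lineart = load_image("lineart.png")  # your line-art conditioning image
image = pipe("a stone castle on a hill, ao, 3d", image=lineart).images[0]
image.save("castle_ao.png")
```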
{}
toyxyz/Line2ao_sd1.5
null
[ "region:us" ]
null
2024-04-16T11:29:27+00:00
[]
[]
TAGS #region-us
Lora generating images in the Ambient occlusion style. sd version : SD 1.5 You can use controlnet like Canny, lineart, Softedge, etc. to create Depth with just lineart. Add 'ao, 3d' to the prompt. The number after the Lora filename refers to the number of the merged lora. !image/png !image/png <video controls autoplay src="URL
[]
[ "TAGS\n#region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
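The card's "How to Get Started" section is still a placeholder; below is a minimal sketch, assuming the checkpoint loads as a standard Llama causal LM (the repo tags list "transformers", "llama", and "text-generation"). The prompt and generation settings are illustrative only.

```python
# Hedged sketch: prompt and generation settings are illustrative, not from the card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hi000000/insta_chat_lama2-koen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello! Tell me about your day.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```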
{"library_name": "transformers", "tags": []}
hi000000/insta_chat_lama2-koen
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T11:32:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
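As above, the get-started section is a placeholder; a minimal sketch follows, assuming the checkpoint ships a Mistral-Instruct-style chat template (the repo tags list "mistral", "text-generation", and "conversational", and the id suggests a summarization fine-tune). The input text and generation settings are illustrative.

```python
# Hedged sketch: chat-template availability is assumed from the "conversational" tag.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Jooonhan/mistral_7b_instruct_summarization_v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Summarize: The quick brown fox jumps over the lazy dog."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```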
{"library_name": "transformers", "tags": []}
Jooonhan/mistral_7b_instruct_summarization_v4
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-16T11:32:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # GUE_mouse_0-seqsight_16384_512_34M-L32_all This model is a fine-tuned version of [mahdibaghbanzadeh/seqsight_16384_512_34M](https://huggingface.co/mahdibaghbanzadeh/seqsight_16384_512_34M) on the [mahdibaghbanzadeh/GUE_mouse_0](https://huggingface.co/datasets/mahdibaghbanzadeh/GUE_mouse_0) dataset. It achieves the following results on the evaluation set: - Loss: 2.1128 - F1 Score: 0.6221 - Accuracy: 0.6222 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2048 - eval_batch_size: 2048 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10000 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 Score | Accuracy | |:-------------:|:------:|:-----:|:---------------:|:--------:|:--------:| | 0.6231 | 50.0 | 200 | 0.7563 | 0.6082 | 0.6173 | | 0.3925 | 100.0 | 400 | 0.9459 | 0.6194 | 0.6198 | | 0.2564 | 150.0 | 600 | 1.1758 | 0.6280 | 0.6284 | | 0.187 | 200.0 | 800 | 1.3161 | 0.6269 | 0.6272 | | 0.1526 | 250.0 | 1000 | 1.3553 | 0.6269 | 0.6272 | | 0.1297 | 300.0 | 1200 | 1.4791 | 0.6277 | 0.6284 | | 0.1127 | 350.0 | 1400 | 1.5426 | 0.6345 | 0.6346 | | 0.1032 | 400.0 | 1600 | 1.5698 | 0.6353 | 0.6358 | | 0.0922 | 450.0 | 1800 | 1.6436 | 0.6359 | 0.6358 | | 0.0845 | 500.0 | 2000 | 1.7416 | 0.6284 | 0.6284 | | 0.0781 | 550.0 | 2200 | 1.7402 | 0.6322 | 0.6321 | | 0.0731 | 600.0 | 2400 | 1.7845 | 0.6293 | 0.6296 | | 0.0668 | 650.0 | 2600 | 1.8886 | 0.6233 | 0.6235 | | 0.0636 | 700.0 | 2800 | 1.9114 | 0.6321 | 0.6321 | | 0.0595 | 750.0 | 3000 | 1.9911 | 0.6346 | 0.6346 | | 0.0572 | 800.0 | 3200 | 2.0686 | 0.6357 | 0.6358 | | 0.0542 | 850.0 | 3400 | 2.1804 | 0.6272 | 0.6272 | | 0.0527 | 900.0 | 3600 | 2.1717 | 0.6295 | 0.6296 | | 0.0483 | 950.0 | 3800 | 2.2253 | 0.6415 | 0.6420 | | 0.0465 | 1000.0 | 4000 | 2.0959 | 0.6238 | 0.6247 | | 0.0443 | 1050.0 | 4200 | 2.4143 | 0.6309 | 0.6309 | | 0.0425 | 1100.0 | 4400 | 2.2519 | 0.6245 | 0.6247 | | 0.0428 | 1150.0 | 4600 | 2.3705 | 0.6268 | 0.6272 | | 0.0389 | 1200.0 | 4800 | 2.2623 | 0.6278 | 0.6284 | | 0.0393 | 1250.0 | 5000 | 2.2808 | 0.6254 | 0.6259 | | 0.038 | 1300.0 | 5200 | 2.5504 | 0.6218 | 0.6222 | | 0.0368 | 1350.0 | 5400 | 2.4103 | 0.6111 | 0.6111 | | 0.0338 | 1400.0 | 5600 | 2.5645 | 0.6148 | 0.6148 | | 0.0349 | 1450.0 | 5800 | 2.4365 | 0.6198 | 0.6198 | | 0.033 | 1500.0 | 6000 | 2.4571 | 0.6123 | 0.6123 | | 0.032 | 1550.0 | 6200 | 2.5117 | 0.6111 | 0.6111 | | 0.0302 | 1600.0 | 6400 | 2.5159 | 0.6145 | 0.6148 | | 0.0291 | 1650.0 | 6600 | 2.5450 | 0.6197 | 0.6198 | | 0.0292 | 1700.0 | 6800 | 2.5952 | 0.6240 | 0.6247 | | 0.0292 | 1750.0 | 7000 | 2.3207 | 0.6169 | 0.6173 | | 0.0274 | 1800.0 | 7200 | 2.4385 | 0.6259 | 0.6259 | | 0.0275 | 1850.0 | 7400 | 2.7761 | 0.6165 | 0.6173 | | 0.0265 | 1900.0 | 7600 | 2.4491 | 0.6112 | 0.6111 | | 0.0264 | 1950.0 | 7800 | 2.4501 | 0.6124 | 0.6123 | | 0.0246 | 2000.0 | 8000 | 2.6071 | 0.6146 | 0.6148 | | 0.0257 | 2050.0 | 8200 | 2.4462 | 0.6159 | 0.6160 | | 0.0245 | 2100.0 | 8400 | 2.5857 | 0.6185 | 0.6185 | | 0.0243 | 2150.0 | 8600 | 2.4812 | 0.6148 | 0.6148 | | 0.0241 | 2200.0 | 8800 | 2.5074 
| 0.6161 | 0.6160 | | 0.0234 | 2250.0 | 9000 | 2.5333 | 0.6197 | 0.6198 | | 0.0226 | 2300.0 | 9200 | 2.5442 | 0.6169 | 0.6173 | | 0.0219 | 2350.0 | 9400 | 2.5396 | 0.6219 | 0.6222 | | 0.0223 | 2400.0 | 9600 | 2.5336 | 0.6195 | 0.6198 | | 0.023 | 2450.0 | 9800 | 2.4606 | 0.6159 | 0.6160 | | 0.0223 | 2500.0 | 10000 | 2.5010 | 0.6146 | 0.6148 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
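Because this repo holds a PEFT adapter rather than a full checkpoint, it has to be attached to the named base model at load time. A minimal sketch follows; the sequence-classification head with num_labels=2, the trust_remote_code flag, and the DNA input string are assumptions (the card reports F1 and accuracy but does not name the task head or input format).

```python
# Hedged sketch: head type, num_labels, and input are assumptions, not from the card.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "mahdibaghbanzadeh/seqsight_16384_512_34M"
adapter_id = "mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_34M-L32_all"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,            # assumption: binary classification (F1/accuracy reported)
    trust_remote_code=True,  # in case the base defines a custom architecture
)
# Attach the fine-tuned adapter weights to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("ACGTACGTACGTACGT", return_tensors="pt")  # placeholder DNA sequence
print(model(**inputs).logits)
```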
{"library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "mahdibaghbanzadeh/seqsight_16384_512_34M", "model-index": [{"name": "GUE_mouse_0-seqsight_16384_512_34M-L32_all", "results": []}]}
mahdibaghbanzadeh/GUE_mouse_0-seqsight_16384_512_34M-L32_all
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mahdibaghbanzadeh/seqsight_16384_512_34M", "region:us" ]
null
2024-04-16T11:33:36+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us
GUE\_mouse\_0-seqsight\_16384\_512\_34M-L32\_all ================================================ This model is a fine-tuned version of mahdibaghbanzadeh/seqsight\_16384\_512\_34M on the mahdibaghbanzadeh/GUE\_mouse\_0 dataset. It achieves the following results on the evaluation set: * Loss: 2.1128 * F1 Score: 0.6221 * Accuracy: 0.6222 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 2048 * eval\_batch\_size: 2048 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 10000 ### Training results ### Framework versions * PEFT 0.9.0 * Transformers 4.38.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.1 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mahdibaghbanzadeh/seqsight_16384_512_34M #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 2048\n* eval\\_batch\\_size: 2048\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 10000", "### Training results", "### Framework versions\n\n\n* PEFT 0.9.0\n* Transformers 4.38.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.1\n* Tokenizers 0.15.2" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
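A minimal inference sketch, assuming the fine-tuned checkpoint loads the same way as the base HuggingFaceM4/idefics2-8b (AutoProcessor plus AutoModelForVision2Seq, consistent with the Transformers 4.40.0.dev0 version listed above). The document image and question are placeholders.

```python
# Hedged sketch: image path and question are placeholders; loading follows the base model's API.
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

model_id = "diack/idefics2-8b-docvqa-finetuned-tutorial"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("document.png")  # placeholder document scan
messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "What is the total amount on this invoice?"},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```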
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "idefics2-8b-docvqa-finetuned-tutorial", "results": []}]}
diack/idefics2-8b-docvqa-finetuned-tutorial
null
[ "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-16T11:34:00+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# idefics2-8b-docvqa-finetuned-tutorial This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# idefics2-8b-docvqa-finetuned-tutorial\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 50\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]