Dataset schema (17 columns). For `stringclasses` the stat is the number of distinct values; otherwise it is the min–max length:

| Column | Type | Stats |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1–900k |
| metadata | stringlengths | 2–438k |
| id | stringlengths | 5–122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1–1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25–25 |
| arxiv | sequencelengths | 0–201 |
| languages | sequencelengths | 0–1.83k |
| tags_str | stringlengths | 17–9.34k |
| text_str | stringlengths | 0–389k |
| text_lists | sequencelengths | 0–722 |
| processed_texts | sequencelengths | 1–723 |
| tokens_length | sequencelengths | 1–723 |
| input_texts | sequencelengths | 1–1 |

Each record below lists these 17 fields in this order, one field per line (long `text` values may wrap across lines).
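For orientation, a minimal sketch of how rows with this schema can be loaded and inspected with the `datasets` library; the repo id below is a placeholder, since the dump does not name its source dataset:

```python
# Sketch: load and inspect rows matching the schema above.
# "org/model-cards-dump" is a hypothetical repo id, not the real source.
import json

from datasets import load_dataset

ds = load_dataset("org/model-cards-dump", split="train")
print(ds.features)  # pipeline_tag, library_name, text, metadata, id, ...

row = ds[0]
print(row["id"], row["created_at"], row["tags"][:5])

meta = json.loads(row["metadata"])  # the metadata field is a JSON string
print(meta.get("license"), row["text"][:200])
```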
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# nash_dpo_merge_iter_2

This model is a fine-tuned version of [YYYYYYibo/nash_dpo_iter_1](https://huggingface.co/YYYYYYibo/nash_dpo_iter_1) on the "updated" and "original" datasets. It achieves the following results on the evaluation set:

- Loss: 0.6368
- Rewards/chosen: -0.5885
- Rewards/rejected: -0.7591
- Rewards/accuracies: 0.6380
- Rewards/margins: 0.1706
- Logps/rejected: -365.7411
- Logps/chosen: -357.2530
- Logits/rejected: -2.1348
- Logits/chosen: -2.2675

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6429 | 0.51 | 100 | 0.6368 | -0.5885 | -0.7591 | 0.6380 | 0.1706 | -365.7411 | -357.2530 | -2.1348 | -2.2675 |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
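The card stops at framework versions and gives no usage snippet. A minimal loading sketch, assuming the adapter applies on top of the `alignment-handbook/zephyr-7b-sft-full` base named in this record's metadata; device placement, dtype, and the chat template are omitted for brevity:

```python
# Sketch (not part of the original card): load the DPO LoRA adapter on its base.
# Both repo ids come from this record; the prompt and generation settings are illustrative.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "alignment-handbook/zephyr-7b-sft-full"
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "YYYYYYibo/nash_dpo_merge_iter_2")
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("What does DPO optimize?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```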
{"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "nash_dpo_merge_iter_2", "results": []}]}
YYYYYYibo/nash_dpo_merge_iter_2
null
[ "peft", "safetensors", "mistral", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "dataset:updated", "dataset:original", "base_model:alignment-handbook/zephyr-7b-sft-full", "license:apache-2.0", "region:us" ]
null
2024-05-01T10:16:50+00:00
[]
[]
TAGS #peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us
nash\_dpo\_merge\_iter\_2 ========================= This model is a fine-tuned version of YYYYYYibo/nash\_dpo\_iter\_1 on the updated and the original datasets. It achieves the following results on the evaluation set: * Loss: 0.6368 * Rewards/chosen: -0.5885 * Rewards/rejected: -0.7591 * Rewards/accuracies: 0.6380 * Rewards/margins: 0.1706 * Logps/rejected: -365.7411 * Logps/chosen: -357.2530 * Logits/rejected: -2.1348 * Logits/chosen: -2.2675 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 4 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * total\_eval\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ 69, 176, 5, 52 ]
[ "TAGS\n#peft #safetensors #mistral #alignment-handbook #generated_from_trainer #trl #dpo #dataset-updated #dataset-original #base_model-alignment-handbook/zephyr-7b-sft-full #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* total\\_eval\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1### Training results### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
twodigit/Meta-Llama-3-8B-Instruct-koconv2_4327k-sft-lora-40000
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:21:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
twodigit/Meta-Llama-3-8B-Instruct-koconv2_4327k-sft-lora-10000
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:21:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
# Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF

This model was converted to GGUF format from [`nonsonpratico/phi-3.8-128k-italian`](https://huggingface.co/nonsonpratico/phi-3.8-128k-italian) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/nonsonpratico/phi-3.8-128k-italian) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Then invoke the llama.cpp CLI or server.

CLI:

```bash
llama-cli --hf-repo Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF --model phi-3.8-128k-italian.Q4_K_M.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF --model phi-3.8-128k-italian.Q4_K_M.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3.8-128k-italian.Q4_K_M.gguf -n 128
```
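Beyond the llama.cpp CLI shown above, the same quantized file can also be pulled straight from the Hub with the `llama-cpp-python` bindings. A minimal sketch, assuming that package and `huggingface-hub` are installed; the prompt and token budget are illustrative:

```python
# Sketch (not from the card): run the GGUF checkpoint via llama-cpp-python.
# Llama.from_pretrained downloads the file from the Hub on first use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF",
    filename="phi-3.8-128k-italian.Q4_K_M.gguf",
    n_ctx=2048,  # matches the -c 2048 used for llama-server above
)
out = llm("Qual è il senso della vita?", max_tokens=128)
print(out["choices"][0]["text"])
```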
{"language": ["it"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["seeweb/Seeweb-it-292-forLLM"]}
Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "it", "dataset:seeweb/Seeweb-it-292-forLLM", "license:apache-2.0", "region:us" ]
null
2024-05-01T10:21:42+00:00
[]
[ "it" ]
TAGS #gguf #llama-cpp #gguf-my-repo #it #dataset-seeweb/Seeweb-it-292-forLLM #license-apache-2.0 #region-us
# Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF This model was converted to GGUF format from 'nonsonpratico/phi-3.8-128k-italian' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nonsonpratico/phi-3.8-128k-italian' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #it #dataset-seeweb/Seeweb-it-292-forLLM #license-apache-2.0 #region-us \n", "# Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nonsonpratico/phi-3.8-128k-italian' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ 53, 89, 52 ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #it #dataset-seeweb/Seeweb-it-292-forLLM #license-apache-2.0 #region-us \n# Cali9994/phi-3.8-128k-italian-Q4_K_M-GGUF\nThis model was converted to GGUF format from 'nonsonpratico/phi-3.8-128k-italian' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
twodigit/Meta-Llama-3-8B-Instruct-koconv2_4327k-sft-lora-80000
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:22:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Finetune-test2

This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unspecified ("None") dataset. It achieves the following results on the evaluation set:

- eval_loss: 0.8662
- eval_runtime: 15.7392
- eval_samples_per_second: 6.354
- eval_steps_per_second: 1.588
- epoch: 12.0
- step: 675

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 20
- mixed_precision_training: Native AMP

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
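Here too the card ends without a usage snippet. A hedged sketch for attaching the adapter to its GPTQ base; note that loading the quantized checkpoint additionally needs the `auto-gptq` (or `optimum`) backend and `accelerate` for `device_map`, which is an assumption beyond what the card states:

```python
# Sketch (not from the card): attach the LoRA adapter to the GPTQ base model.
# Requires auto-gptq/optimum plus accelerate; repo ids come from this record.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "AmaanUsmani/Finetune-test2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```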
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Finetune-test2", "results": []}]}
AmaanUsmani/Finetune-test2
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-01T10:22:35+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
# Finetune-test2 This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the None dataset. It achieves the following results on the evaluation set: - eval_loss: 0.8662 - eval_runtime: 15.7392 - eval_samples_per_second: 6.354 - eval_steps_per_second: 1.588 - epoch: 12.0 - step: 675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 20 - mixed_precision_training: Native AMP ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.0.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# Finetune-test2\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.8662\n- eval_runtime: 15.7392\n- eval_samples_per_second: 6.354\n- eval_steps_per_second: 1.588\n- epoch: 12.0\n- step: 675", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "# Finetune-test2\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.8662\n- eval_runtime: 15.7392\n- eval_samples_per_second: 6.354\n- eval_steps_per_second: 1.588\n- epoch: 12.0\n- step: 675", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ 52, 115, 7, 9, 9, 4, 133, 52 ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n# Finetune-test2\n\nThis model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the None dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.8662\n- eval_runtime: 15.7392\n- eval_samples_per_second: 6.354\n- eval_steps_per_second: 1.588\n- epoch: 12.0\n- step: 675## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 4\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 2\n- num_epochs: 20\n- mixed_precision_training: Native AMP### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.0.1+cu118\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
rahulprajapat9/tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:22:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 22, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
astro21/pix2struct-base-coco-v2
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:23:28+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
twodigit/Meta-Llama-3-8B-Instruct-koconv2_4327k-sft-lora-120000
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:24:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model

- **Developed by:** srbdtwentyfour
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-Instruct-bnb-4bit"}
srbdtwentyfour/mystery-llama-3-8b
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-Instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:26:41+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: srbdtwentyfour - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: srbdtwentyfour\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: srbdtwentyfour\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 67, 87 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-Instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: srbdtwentyfour\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-Instruct-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
gen-bi/llama-2-ko-juno-7b
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-01T10:28:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 51, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
rainerberger/planetn6
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:29:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: Chhabi/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
Chhabi/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-05-01T10:29:19+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: Chhabi/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Chhabi/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Chhabi/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ 35, 199 ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: Chhabi/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="ArnavModanwal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
ArnavModanwal/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-01T10:29:22+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1.\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1.\n\n ## Usage" ]
[ 35, 33 ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
RefalMachine/ruadapt_llama3_part1-2_vo_3e4_bs256
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:31:34+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** davanstrien - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
davanstrien/LLama-3-dataset-tldr-gguf
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:34:36+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: davanstrien - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 61, 81 ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
reevan/gemma_kan_rom_16bit
null
[ "transformers", "pytorch", "gemma", "text-generation", "unsloth", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:34:38+00:00
[ "1910.09700" ]
[]
TAGS #transformers #pytorch #gemma #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #pytorch #gemma #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 48, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #pytorch #gemma #text-generation #unsloth #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ArnavModanwal/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.56 +/- 2.71", "name": "mean_reward", "verified": false}]}]}]}
ArnavModanwal/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-01T10:36:05+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
[ 26, 31 ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion-turkish19 This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2663 - Accuracy: 0.9333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 71 | 0.2038 | 0.9524 | | No log | 2.0 | 142 | 0.2325 | 0.9333 | | No log | 3.0 | 213 | 0.2663 | 0.9333 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "dbmdz/bert-base-turkish-cased", "model-index": [{"name": "emotion-turkish19", "results": []}]}
asude55/emotion-turkish19
null
[ "transformers", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:dbmdz/bert-base-turkish-cased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:36:20+00:00
[]
[]
TAGS #transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dbmdz/bert-base-turkish-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us
emotion-turkish19 ================= This model is a fine-tuned version of dbmdz/bert-base-turkish-cased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2663 * Accuracy: 0.9333 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 64 * eval\_batch\_size: 64 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dbmdz/bert-base-turkish-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 54, 101, 5, 44 ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #generated_from_trainer #base_model-dbmdz/bert-base-turkish-cased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
steve1989/fingpt-SA-bnb-4bits-finedtuned-financialphrasebank
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:36:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
mlx
# GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx This quantized low-bit model was converted to MLX format from [`GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0`](https://huggingface.co/GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0). Refer to the [original model card](https://huggingface.co/GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0) for more details on the model. ## Use with mlx ```bash pip install gbx-lm ``` ```python from gbx_lm import load, generate model, tokenizer = load("GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
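For serving several prompts, the same `load`/`generate` pair can simply be reused. The sketch below assumes nothing beyond the API demonstrated in the card's snippet (`generate` returning the response string, with `prompt` and `verbose` as keyword arguments):

```python
from gbx_lm import load, generate

# Load once, generate many times; only the calls demonstrated in the
# card's own snippet are used here.
model, tokenizer = load("GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx")

for prompt in ["hello", "Summarize MLX in one sentence."]:
    # verbose=False is the only assumption beyond the original example,
    # which passed verbose=True to the same keyword.
    response = generate(model, tokenizer, prompt=prompt, verbose=False)
    print(response)
```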
{"license": "apache-2.0", "tags": ["mlx"]}
GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx
null
[ "mlx", "safetensors", "llama", "license:apache-2.0", "region:us" ]
null
2024-05-01T10:37:04+00:00
[]
[]
TAGS #mlx #safetensors #llama #license-apache-2.0 #region-us
# GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx This quantized low-bit model was converted to MLX format from ['GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0'](). Refer to the original model card for more details on the model. ## Use with mlx
[ "# GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n", "# GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ 23, 80, 6 ]
[ "TAGS\n#mlx #safetensors #llama #license-apache-2.0 #region-us \n# GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0-mlx\nThis quantized low-bit model was converted to MLX format from ['GreenBitAI/01-Yi-9B-layer-mix-bpw-3.0']().\nRefer to the original model card for more details on the model.## Use with mlx" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
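The card's "How to Get Started" section above is left as a placeholder, so here is a minimal, non-authoritative loading sketch. It assumes the repository behaves like a standard Mistral-style causal LM whose tokenizer ships a chat template (suggested by the `conversational` tag); the "4bit" in the repo name hints at a quantized checkpoint, which `from_pretrained` would pick up from the saved config if one is present:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanchit42/Mistral-7b-4bit-finetune"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# If the saved config does not already carry quantization settings, a
# BitsAndBytesConfig could be passed here instead (assumption, not documented).
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

# The tokenizer is assumed to define a chat template, per the card's tags.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```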
{"library_name": "transformers", "tags": []}
sanchit42/Mistral-7b-4bit-finetune
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:37:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
setfit
# SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 75 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 9 | <ul><li>'What type of fabric is recommended for creating comfortable clothing that is resistant to wear and tear?'</li><li>'What type of fabric is best for creating garments with slight nubs and variations for a natural look?'</li><li>'Where can I buy durable cotton fabric in deep olive green for everyday wear?'</li></ul> | | 43 | <ul><li>'What is a tightly woven fabric suitable for lightweight jackets and formal trousers?'</li><li>'What fabric is not ideal for garments requiring significant stretch or drape, such as knitwear or flowy dresses?'</li><li>'Which textile is best for garments that need a subtle texture and medium weight?'</li></ul> | | 66 | <ul><li>'Searching for a dark gray textile with a soft texture and fine weave pattern suitable for making skirts and dresses.'</li><li>'What fabric type is recommended for making garments that need to maintain their shape while being comfortable and adaptable for different styles?'</li><li>'Which fabric is suitable for making clothes that maintain their shape but also provide comfort and flexibility?'</li></ul> | | 22 | <ul><li>'What fabric has a raised texture and tight weave for garments that require strength and longevity?'</li><li>'What fabric is recommended for garments that require both comfort and resilience?'</li><li>'What is the best fabric for creating outerwear with a medium weight and good body?'</li></ul> | | 5 | <ul><li>'What 
kind of textile is suitable for crafting lightweight summer dresses with a fluid drape and hint of elasticity?'</li><li>'What type of textile and weave is consistent with an interlocking loop structure and stretchable properties?'</li><li>'What fabric can I use to make soft loungewear that has a luxurious feel and good performance in apparel?'</li></ul> | | 52 | <ul><li>'What fabric has moisture-wicking properties for sporty summer wear?'</li><li>'Where to find textiles suitable for people with sensitive skin for comfortable wear?'</li><li>'What are the best fabrics for moisture-wicking properties in sporty or casual summer wear?'</li></ul> | | 67 | <ul><li>'What fabric is recommended for making durable clothing with a smooth, consistent grain?'</li><li>'Which fabric has a solid color resembling taupe and a moderate saturation?'</li><li>'What kind of textile is good for creating garments with a soft drape and gentle folds?'</li></ul> | | 32 | <ul><li>'What fabric is best suited for creating clothing with a fine gauge knit and a smooth flow for ease of movement?'</li><li>'What fabric is ideal for making form-fitting leggings and sports tops with good stretch and flexibility?'</li><li>'What type of fabric is recommended for crafting garments with a consistent dark gray hue and a slight sheen on the surface?'</li></ul> | | 53 | <ul><li>'Where can I find a high-quality textile ideal for making athletic wear with stretchability?'</li><li>'What textile is perfect for making garments that require both structure and elasticity?'</li><li>'Which fabric is ideal for creating athletic wear with strong saturation and even color distribution?'</li></ul> | | 16 | <ul><li>'What fabric has a textured surface with visible loops and a cozy hand feel?'</li><li>'What fabric is best for making durable garments that have a mottled black, white, and gray appearance?'</li><li>'What type of fabric displays a mottled grayscale coloration with a melange effect?'</li></ul> | | 4 | <ul><li>'Which fabric has a fine knit weave, smooth texture, and a slight sheen?'</li><li>'What is the most suitable fabric for creating clothing items for individuals with sensitive skin?'</li><li>'What fabric can I use for creating lightweight and breathable summer tops with a soft texture?'</li></ul> | | 65 | <ul><li>'What type of fabric is this deep blue twill textile with a slight rough texture and medium-weight suitable for?'</li><li>'What fabric would be suitable for making comfortable and form-fitting jeans?'</li><li>'What type of fabric is ideal for making durable and form-fitting jeans?'</li></ul> | | 55 | <ul><li>'What is the composition of the knit fabric with a fluid drape and some stretch?'</li><li>'What fabric would be best for making form-fitting dresses that require some stretch and elasticity?'</li><li>'What fabric is suitable for form-fitting clothing like t-shirts, leggings, and dresses?'</li></ul> | | 12 | <ul><li>'Which textile is suitable for garments that need a delicate fall and a matte finish?'</li><li>'What fabric is recommended for creating linings in apparel due to its lightness and versatility?'</li><li>'What is a versatile fabric option for making shirts that are both comfortable and durable?'</li></ul> | | 71 | <ul><li>'Which textile exhibits a striped pattern achieved through yarn dyeing for a sharp contrast?'</li><li>'What type of cotton fabric has a smooth texture and is suitable for making summer dresses?'</li><li>'Which fabric is suitable for making casual shirting with a soft hand feel and fluid 
drape?'</li></ul> | | 25 | <ul><li>'What material is floppy with some flexibility but not significant stretch?'</li><li>'Which fabric is better for utility wear rather than structured silhouettes?'</li><li>'What textile has small colorful fibers and lacks a traditional woven or knitted structure?'</li></ul> | | 6 | <ul><li>'What fabric is suitable for casual wear and layering in various climates with a subtle sheen and clean surface?'</li><li>'What fabric can I use to make moisture-wicking clothing suitable for people with sensitive skin and a versatile look?'</li><li>'What fabric can I use to create garments that have a neat finish and attention to detail in the textile processing?'</li></ul> | | 20 | <ul><li>'What fabric is versatile for multi-seasonal use, durable, and maintains its shape over time?'</li><li>'What fabric is recommended for making leggings and casual wear with a balanced drape and consistent coloring?'</li><li>'Where can I find a fabric suitable for multi-seasonal use with a consistent hue and soft hand texture?'</li></ul> | | 10 | <ul><li>'What type of cotton fabric is ideal for making casual shirts and trousers?'</li><li>'Which fabric has a soft drape and medium weight for making versatile garments?'</li><li>'What type of fabric is ideal for making versatile garments with good movement and flow?'</li></ul> | | 0 | <ul><li>'Which fabric has a clean appearance with a subtle sheen from bamboo fibers?'</li><li>'Which fabric is ideal for making garments that need to maintain their shape but have some stretch?'</li><li>'What fabric is recommended for making garments with a clean and even black color without significant variations or patterns?'</li></ul> | | 42 | <ul><li>'What fabric is suitable for making versatile dresses with a fluid drape and stretchy feel?'</li><li>'What type of knit fabric is recommended for creating garments that require a fluid drape and some degree of elasticity?'</li><li>'Where can I find a vibrant red fabric with high saturation for making eye-catching garments?'</li></ul> | | 57 | <ul><li>'What type of fabric is light grey with a cool undertone and has a soft, fluid drape?'</li><li>'What material is best for making comfortable and durable clothing suitable for regular wear?'</li><li>'Which fabric offers a combination of comfort, durability, and stretch for versatile garment applications?'</li></ul> | | 36 | <ul><li>'What fabric can I use to make comfortable and flexible activewear?'</li><li>'What type of fabric is best for making lightweight sweaters with a smooth texture?'</li><li>'What type of textile is best for making layering pieces for cooler climates?'</li></ul> | | 37 | <ul><li>'What textile is smooth with fine threads and a gentle drape?'</li><li>'What is the best fabric for creating breathable and comfortable dresses for warm weather?'</li><li>'What type of fabric is best for creating lightweight blouses with a soft drape?'</li></ul> | | 58 | <ul><li>"Which textile is lightweight and breathable, suitable for children's wear with a green and blue floral design?"</li><li>'Ideal textile for t-shirts that require a degree of stretchability'</li><li>'Which fabric is recommended for creating garments with moisture-wicking properties and a vibrant color palette?'</li></ul> | | 56 | <ul><li>'What type of fabric is this medium grey textile with a smooth drape and slight stretch?'</li><li>'What is the best fabric for making light sweaters that are durable and long-lasting?'</li><li>'What type of fabric is ideal for making everyday wear garments 
with a smooth texture and solid color?'</li></ul> | | 17 | <ul><li>'What fabric is textured with fine loops and suitable for creating garments that require some structural qualities?'</li><li>'What fabric exhibits a brushed or fleeced finish and would be perfect for crafting cozy winter clothing?'</li><li>'What fabric is recommended for fall and winter activewear due to its warmth and comfort?'</li></ul> | | 72 | <ul><li>'What is a versatile cotton fabric with fine to medium thread count, perfect for creating breathable garments for warm climates?'</li><li>'What fabric is ideal for making blouses and dresses with a simple, unadorned aesthetic?'</li><li>'What fabric is suitable for creating durable and versatile garments without unique finishes?'</li></ul> | | 54 | <ul><li>'Looking for a fabric suitable for making lightweight jackets with a soft drape.'</li><li>'What type of fabric is commonly used in t-shirts for a comfortable and breathable feel?'</li><li>'What kind of textile weave is ideal for crafting casual t-shirts with some stretchability?'</li></ul> | | 59 | <ul><li>'Where can I find a knit fabric with a slightly textured surface and fine, soft feel that is comfortable for casual wear?'</li><li>'What fabric is versatile and comfortable for casual wear?'</li><li>'What knit fabric is ideal for making dresses that require a bit of stretch and versatility in styling?'</li></ul> | | 60 | <ul><li>'What fabric would be suitable for making t-shirts that conform well to body shapes and have vibrant hues?'</li><li>'Where can I find a jersey knit fabric with a smooth texture and fine knit structure suitable for t-shirts?'</li><li>'What type of fabric is this deep purple floral patterned material made of?'</li></ul> | | 1 | <ul><li>'What is the best fabric for making clothing with moisture-wicking properties?'</li><li>'What type of fabric would be recommended for creating structured garments that also offer stretch and flexibility?'</li><li>'What is the best fabric for making clothing with moisture-wicking properties?'</li></ul> | | 47 | <ul><li>'What type of textile is ideal for making spring and summer leggings with a smooth texture and stretchability?'</li><li>'Which fabric is lightweight and ideal for creating leggings that maintain their shape and offer flexibility?'</li><li>'What fabric composition is suitable for creating lightweight jackets that allow for movement and breathability?'</li></ul> | | 28 | <ul><li>'What fabric is suitable for making blouses, dresses, skirts, and lightweight jackets?'</li><li>'What fabric with a smooth surface and medium weight is suitable for structured garments?'</li><li>'What fabric is durable and likely to maintain its color and shape well?'</li></ul> | | 13 | <ul><li>'Which fabric is recommended for casual loungewear that needs to be both comfortable and resilient?'</li><li>'What is the best fabric blend for making soft and durable lightweight sweaters?'</li><li>'What type of fabric offers a good balance between performance and aesthetics for everyday wear?'</li></ul> | | 26 | <ul><li>'What fabric has a plain weave pattern, smooth surface, and fine thread count with a slight sheen?'</li><li>'Is there a fabric with moderate strength and a smooth finish ideal for creating garments with soft silhouettes?'</li><li>'What fabric is 100% Rayon, lightweight, and ideal for creating garments with soft silhouettes?'</li></ul> | | 15 | <ul><li>'What knit fabric would be suitable for making cozy apparel with warmth without excessive bulk?'</li><li>'Which fabric is 
best for creating casual wear with an understated aesthetic and versatile appeal?'</li><li>'What type of fabric is characterized by a melange of earthy tones with a heathered effect?'</li></ul> | | 50 | <ul><li>'Where can I find a vibrant blue fabric with consistent dye saturation for t-shirts and activewear?'</li><li>'What fabric is best for creating clothing with a consistent, even dye and some stretchability for comfort and durability?'</li><li>'Where can I find a knit fabric with vibrant blue color and a smooth, fine texture?'</li></ul> | | 24 | <ul><li>'What type of polyester fabric offers a comfortable fit with a moderate drape for daily wear?'</li><li>'What fabric has a textured surface and slight elasticity for comfortable fit?'</li><li>'What type of textile is recommended for garments that require consistent saturation and evenness in color?'</li></ul> | | 29 | <ul><li>'Which fabric is ideal for creating garments that can withstand regular wear and maintain their texture over time?'</li><li>'What type of fabric has a consistent grey hue with a subtle mottled appearance?'</li><li>'What polyester textile has a micro crinkle texture and fine threads?'</li></ul> | | 44 | <ul><li>'What knit textile is suitable for creating casual dresses with a fluid drape and soft texture?'</li><li>"I'm searching for a jersey knit fabric with durable, wrinkle-resistant properties for everyday wear, do you have any options?"</li><li>'What type of knit fabric is recommended for everyday apparel due to its comfort and ease of movement?'</li></ul> | | 38 | <ul><li>'What fabric is best for creating blouses with a clean and crisp appearance?'</li><li>'What type of fabric provides a combination of durability and practicality for everyday wear garments?'</li><li>"I'm looking for a fabric with a clean and crisp appearance that is durable and easy to care for, any suggestions?"</li></ul> | | 23 | <ul><li>'What fabric is appropriate for garments that require a hint of texture in the surface?'</li><li>'What type of fabric is suitable for creating structured jackets and trousers with a professional look?'</li><li>'What fabric is suitable for making medium-weight garments with a hint of roughness in texture?'</li></ul> | | 45 | <ul><li>'Interested in a fabric with stretch and recovery for making garments that require some elasticity and resilience?'</li><li>'Which fabric is recommended for creating durable clothing suitable for people with sensitive skin, featuring a smooth texture and vibrant blue color with white dots?'</li><li>'What fabric is recommended for making polka dot clothing with a smooth surface and vibrant color?'</li></ul> | | 31 | <ul><li>'What type of knit textile is recommended for creating layering pieces in solid, dark colors?'</li><li>'What is a versatile fabric for creating garments with a matte finish and uniform color?'</li><li>'Which fabric is suitable for activewear, leggings, and fitted tops due to its stretchability?'</li></ul> | | 19 | <ul><li>"What type of fabric is ideal for making playful children's wear with a vibrant speckled pattern?"</li><li>'Which fabric is suitable for crafting garments that can hide wear and minor soiling due to its unique speckled pattern?'</li><li>'What fabric offers good recovery and fit due to elastane content?'</li></ul> | | 11 | <ul><li>'What is a medium weight textile with a soft drape for creating versatile garments?'</li><li>'What fabric is lightweight and breathable, perfect for making soft summer blouses?'</li><li>'Which fabric is suitable for 
making soft and comfortable shirts and blouses with a consistent light blue hue?'</li></ul> | | 73 | <ul><li>'What type of fabric is suitable for apparel that requires both form and function?'</li><li>'Best fabric for creating statement pieces with a pop of color using a twill weave texture?'</li><li>'Which fabric has a slightly textured surface with medium fineness threads, ideal for structured garments?'</li></ul> | | 64 | <ul><li>'What fabric would be best for making pants that maintain their shape while offering flexibility?'</li><li>'What fabric blend offers both comfort and durability for creating long-lasting clothing?'</li><li>'Which fabric is known for its simple yet durable qualities with no unique finishes?'</li></ul> | | 35 | <ul><li>'What type of fabric is recommended for creating breathable and comfortable clothing for warm weather?'</li><li>'What fabric would be suitable for making lightweight sweaters with a ribbed texture and soft hand?'</li><li>'What type of fabric is best for making form-fitting t-shirts with a fluid drape?'</li></ul> | | 21 | <ul><li>'What fabric blend offers durability and slight stretchability for structured yet comfortable dresses?'</li><li>'What fabric is durable yet versatile for various garment constructions?'</li><li>'What type of cloth is versatile for various seasons due to its weight and composition?'</li></ul> | | 74 | <ul><li>'Need medium weight cotton fabric for creating casual shirts with a balanced color scheme?'</li><li>'Looking for plain weave cotton fabric with a fine thread count and even color distribution?'</li><li>'Which textile is versatile for various seasons like spring and summer due to its lightness?'</li></ul> | | 3 | <ul><li>'Looking for a fabric for casual apparel applications in mild to warm climates with consistent dyeing?'</li><li>'Which fabric blend is recommended for creating apparel with both breathability and a gentle flow?'</li><li>'Where can I purchase a bamboo-spandex blend fabric suitable for all-season clothing with moisture-wicking properties?'</li></ul> | | 8 | <ul><li>'What type of fabric is ideal for creating form-fitting tops with a fluid drape?'</li><li>'What fabric composition combines bamboo and Pret fibers for eco-friendly benefits?'</li><li>'What fabric can I use to make elegant and comfortable cardigans with stretch properties?'</li></ul> | | 18 | <ul><li>'What fabric is recommended for making lightweight garments with a smooth flow and gentle folds?'</li><li>'What type of knit fabric is ideal for creating dresses with moderate stretchability?'</li><li>'What textile composition includes elastane and bamboo for stretchability and comfort in casual apparel?'</li></ul> | | 49 | <ul><li>'What is the ideal textile for crafting activewear with moderate weight and stretch?'</li><li>'Where can I find a jersey knit textile with a soft texture and fine fibers for casual wear?'</li><li>'What is the recommended material for making activewear that allows for ease of movement?'</li></ul> | | 27 | <ul><li>'What is the recommended fabric for creating spring and summer wear with a focus on breathability?'</li><li>'Which textile is recommended for creating blouses, skirts, and other apparel due to its natural sheen and uniform texture?'</li><li>'What type of fabric has a consistent coloration and high level of saturation for apparel applications?'</li></ul> | | 63 | <ul><li>'Which fabric has a plain weave construction and a fine thread count for a smooth texture?'</li><li>'What fabric is durable and versatile for 
everyday wear?'</li><li>'What fabric can be used to make form-fitting clothing like dresses, thanks to its stretchability?'</li></ul> | | 61 | <ul><li>'What fabric can I use to make casual dresses with a smooth texture and a lightweight feel?'</li><li>'What is a fabric with a tight structure and smooth drape ideal for making casual summer outfits?'</li><li>'What type of fabric is lightweight, breathable, and suitable for layering in variable climates?'</li></ul> | | 34 | <ul><li>'What fabric is a periwinkle blue color with medium saturation and no visible defects?'</li><li>'What fabric has a soft and smooth texture with fine threads and a knit pattern?'</li><li>'Searching for a fabric that is durable, breathable, and suitable for people with sensitive skin, any options?'</li></ul> | | 30 | <ul><li>'Are there any fabrics with a simple weave pattern that offer stretchability for semi-fitted garments?'</li><li>'What is the best fabric for creating garments with a good balance of structure and elasticity?'</li><li>'What fabric is suitable for creating garments that require good stretchability and resilience?'</li></ul> | | 7 | <ul><li>'What type of fabric is commonly used in casual wear, loungewear, and active wear due to its durability and performance?'</li><li>'What type of fabric is suitable for creating comfortable loungewear and lightweight sweaters with a fine, smooth texture and good fabric care?'</li><li>'What is the best fabric for making active wear that offers breathability and performance?'</li></ul> | | 14 | <ul><li>'What material provides a fluid drape and enough structure for t-shirts and lounge pants?'</li><li>'What fabric should I choose for producing clothing with good colorfastness and ease of care in a polyester composition?'</li><li>'What is the best material for creating casual dresses with a medium weight drape and a mix of darker and lighter grey tones?'</li></ul> | | 48 | <ul><li>'What is the best fabric for making comfortable and stretchy t-shirts with a casual aesthetic?'</li><li>'Where can I buy a knit fabric that is versatile in styling and functional qualities for a range of clothing?'</li><li>'What type of fabric is durable and suitable for everyday wear with a casual aesthetic?'</li></ul> | | 2 | <ul><li>'Which fabric contains bamboo and Spandex for creating comfortable casual dresses?'</li><li>'What fabric has a fluid drape and slight elasticity, suitable for summer dresses?'</li><li>'What is the recommended fabric for creating draped garments like dresses or tunics?'</li></ul> | | 46 | <ul><li>'Which fabric is ideal for creating lightweight sweaters with a comfortable and breathable feel?'</li><li>'What type of fabric is ideal for making casual t-shirts with a vibrant striped pattern?'</li><li>'What is the recommended textile for making versatile garments that can be layered in cooler climates?'</li></ul> | | 51 | <ul><li>'What knit fabric is versatile for use in various seasons and holds its shape well?'</li><li>'What type of fabric is recommended for creating casual tops with a gentle, soft drape?'</li><li>'What fabric is suitable for making lightweight and comfortable casual tops for everyday wear?'</li></ul> | | 39 | <ul><li>'What fabric would be recommended for making moisture-wicking blouses suitable for warm climates?'</li><li>'What fabric would be apt for creating garments that require a fine, even weave structure?'</li><li>'What is a suitable fabric for creating drapery in light jackets with a slight sheen?'</li></ul> | | 70 | <ul><li>'What type of 
cotton fabric is ideal for making shirts and blouses with a soft drape?'</li><li>'What textile has a slightly textured surface with a fine yet distinct weave?'</li><li>'Which cotton fabric is versatile and suitable for both menswear and womenswear?'</li></ul> | | 68 | <ul><li>'Which fabric is breathable and soft to the touch, suitable for creating comfortable dresses?'</li><li>'Which fabric is recommended for making year-round garments with high color saturation?'</li><li>'What fabric can be used for making shirts, pants, and dresses that require a smooth drape and a hint of elasticity?'</li></ul> | | 40 | <ul><li>'Which fabric is ideal for creating spring and summer collections with a soft touch and lightweight feel?'</li><li>'What textile is known for its easy care and durability in garment construction?'</li><li>'What type of fabric is best suited for creating blouses with a flowing drape and smooth texture?'</li></ul> | | 69 | <ul><li>'Which fabric is durable, resilient, and has a slight give due to the Spandex content?'</li><li>'What fabric has a consistent charcoal gray hue with a matte finish and a twill weave pattern?'</li><li>'What fabric is recommended for making form-fitting jackets that are both durable and breathable?'</li></ul> | | 33 | <ul><li>'What fabric would be suitable for creating draped skirts with a smooth surface and stretchability?'</li><li>'What is the best textile for creating draped skirts with a subtle iridescence?'</li><li>'Searching for a fabric with a smooth texture and slight shimmer effect for draped skirts?'</li></ul> | | 41 | <ul><li>'What fabric has a soft drape and gentle folds, making it perfect for creating flowy and comfortable spring and summer dresses?'</li><li>'What type of knit fabric offers good resistance to wrinkles and shrinkage for practical everyday wear?'</li><li>'Searching for a polyester knit fabric with a consistent hue and saturation for making versatile and adaptable garments.'</li></ul> | | 62 | <ul><li>'Which fabric is versatile and suitable for creating durable garments for everyday wear?'</li><li>'What fabric is suitable for making casual wear like t-shirts, dresses, and tops?'</li><li>'What fabric is known for its stable weave with a small percentage of elastane for comfort and durability?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.3463 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Jazielinho/fabric_model") # Run inference preds = model("What fabric has a comfortable feel and is suitable for people with sensitive skin?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? 
For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 15.4858 | 30 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 39 | | 1 | 40 | | 2 | 41 | | 3 | 32 | | 4 | 37 | | 5 | 33 | | 6 | 36 | | 7 | 40 | | 8 | 30 | | 9 | 36 | | 10 | 42 | | 11 | 38 | | 12 | 39 | | 13 | 43 | | 14 | 41 | | 15 | 41 | | 16 | 35 | | 17 | 42 | | 18 | 40 | | 19 | 43 | | 20 | 44 | | 21 | 36 | | 22 | 37 | | 23 | 40 | | 24 | 44 | | 25 | 42 | | 26 | 41 | | 27 | 38 | | 28 | 41 | | 29 | 46 | | 30 | 41 | | 31 | 38 | | 32 | 40 | | 33 | 39 | | 34 | 41 | | 35 | 44 | | 36 | 45 | | 37 | 40 | | 38 | 37 | | 39 | 44 | | 40 | 39 | | 41 | 42 | | 42 | 36 | | 43 | 43 | | 44 | 42 | | 45 | 37 | | 46 | 41 | | 47 | 44 | | 48 | 36 | | 49 | 40 | | 50 | 43 | | 51 | 44 | | 52 | 39 | | 53 | 38 | | 54 | 38 | | 55 | 43 | | 56 | 41 | | 57 | 44 | | 58 | 40 | | 59 | 41 | | 60 | 35 | | 61 | 43 | | 62 | 41 | | 63 | 43 | | 64 | 37 | | 65 | 41 | | 66 | 36 | | 67 | 38 | | 68 | 42 | | 69 | 41 | | 70 | 39 | | 71 | 43 | | 72 | 34 | | 73 | 40 | | 74 | 41 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (20, 20) - max_steps: -1 - sampling_strategy: undersampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:-------:|:-------------:|:---------------:| | 0.0000 | 1 | 0.2732 | - | | 0.0015 | 50 | 0.2545 | - | | 0.0029 | 100 | 0.2538 | - | | 0.0044 | 150 | 0.2633 | - | | 0.0058 | 200 | 0.2598 | - | | 0.0073 | 250 | 0.2624 | - | | 0.0087 | 300 | 0.2537 | - | | 0.0102 | 350 | 0.2592 | - | | 0.0116 | 400 | 0.2475 | - | | 0.0131 | 450 | 0.2483 | - | | 0.0145 | 500 | 0.2418 | - | | 0.0160 | 550 | 0.2403 | - | | 0.0174 | 600 | 0.2386 | - | | 0.0189 | 650 | 0.2542 | - | | 0.0203 | 700 | 0.237 | - | | 0.0218 | 750 | 0.2423 | - | | 0.0232 | 800 | 0.2421 | - | | 0.0247 | 850 | 0.2409 | - | | 0.0261 | 900 | 0.2453 | - | | 0.0276 | 950 | 0.2404 | - | | 0.0290 | 1000 | 0.2418 | - | | 0.0305 | 1050 | 0.2454 | - | | 0.0319 | 1100 | 0.2446 | - | | 0.0001 | 1 | 0.2471 | - | | 0.0058 | 50 | 0.2375 | - | | 0.0116 | 100 | 0.2351 | - | | 0.0174 | 150 | 0.2406 | - | | 0.0232 | 200 | 0.2382 | - | | 0.0290 | 250 | 0.2374 | - | | 0.0000 | 1 | 0.2515 | - | | 0.0007 | 50 | 0.2335 | - | | 0.0015 | 100 | 0.229 | - | | 0.0022 | 150 | 0.2387 | - | | 0.0029 | 200 | 0.2209 | - | | 0.0036 | 250 | 0.2367 | - | | 0.0044 | 300 | 0.2521 | - | | 0.0051 | 350 | 0.239 | - | | 0.0058 | 400 | 0.2405 | - | | 0.0065 | 450 | 0.2541 | - | | 0.0073 | 500 | 0.2308 | - | | 0.0080 | 550 | 0.2381 | - | | 0.0087 | 600 | 0.2456 | - | | 0.0094 | 650 | 0.2301 | - | | 0.0102 | 700 | 0.2486 | - | | 0.0109 | 750 | 0.2243 | - | | 0.0116 | 800 | 0.2399 | - | | 0.0123 | 850 | 0.2341 | - | | 0.0131 | 900 | 0.2417 | - | | 0.0138 | 950 | 0.215 | - | | 0.0145 | 1000 | 0.2264 | - | | 0.0152 | 1050 | 0.2161 | - | | 0.0160 | 1100 | 0.2273 | - | | 0.0167 | 1150 | 0.2345 | - | | 0.0174 | 1200 | 0.2302 | - | | 0.0181 | 1250 | 0.2337 | - | | 0.0189 | 1300 | 0.2278 | - | | 0.0196 | 1350 | 0.2345 | - | | 0.0203 | 1400 | 0.2323 | - | | 0.0210 | 1450 | 0.2371 | - | | 0.0218 | 1500 | 0.2217 | - | | 0.0225 | 1550 | 0.2282 | - | | 0.0232 | 1600 | 0.224 | - | | 0.0239 
| 1650 | 0.2346 | - | | 0.0247 | 1700 | 0.2087 | - | | 0.0254 | 1750 | 0.2299 | - | | 0.0261 | 1800 | 0.2154 | - | | 0.0268 | 1850 | 0.2108 | - | | 0.0276 | 1900 | 0.216 | - | | 0.0283 | 1950 | 0.2128 | - | | 0.0290 | 2000 | 0.2083 | - | | 0.0297 | 2050 | 0.2053 | - | | 0.0305 | 2100 | 0.2265 | - | | 0.0312 | 2150 | 0.2245 | - | | 0.0319 | 2200 | 0.2036 | - | | 0.0326 | 2250 | 0.2192 | - | | 0.0334 | 2300 | 0.2259 | - | | 0.0341 | 2350 | 0.2038 | - | | 0.0348 | 2400 | 0.2129 | - | | 0.0355 | 2450 | 0.2029 | - | | 0.0363 | 2500 | 0.1883 | - | | 0.0370 | 2550 | 0.187 | - | | 0.0377 | 2600 | 0.2083 | - | | 0.0384 | 2650 | 0.2138 | - | | 0.0392 | 2700 | 0.2057 | - | | 0.0399 | 2750 | 0.2134 | - | | 0.0406 | 2800 | 0.2008 | - | | 0.0413 | 2850 | 0.2018 | - | | 0.0421 | 2900 | 0.2226 | - | | 0.0428 | 2950 | 0.1815 | - | | 0.0435 | 3000 | 0.1943 | - | | 0.0442 | 3050 | 0.1926 | - | | 0.0450 | 3100 | 0.1877 | - | | 0.0457 | 3150 | 0.1764 | - | | 0.0464 | 3200 | 0.2021 | - | | 0.0471 | 3250 | 0.2071 | - | | 0.0479 | 3300 | 0.1832 | - | | 0.0486 | 3350 | 0.1714 | - | | 0.0493 | 3400 | 0.1914 | - | | 0.0500 | 3450 | 0.1749 | - | | 0.0508 | 3500 | 0.1752 | - | | 0.0515 | 3550 | 0.1829 | - | | 0.0522 | 3600 | 0.175 | - | | 0.0529 | 3650 | 0.1752 | - | | 0.0537 | 3700 | 0.1973 | - | | 0.0544 | 3750 | 0.1866 | - | | 0.0551 | 3800 | 0.156 | - | | 0.0558 | 3850 | 0.1923 | - | | 0.0566 | 3900 | 0.1683 | - | | 0.0573 | 3950 | 0.1642 | - | | 0.0580 | 4000 | 0.1705 | - | | 0.0587 | 4050 | 0.174 | - | | 0.0595 | 4100 | 0.1609 | - | | 0.0602 | 4150 | 0.17 | - | | 0.0609 | 4200 | 0.1843 | - | | 0.0616 | 4250 | 0.1855 | - | | 0.0624 | 4300 | 0.1385 | - | | 0.0631 | 4350 | 0.1765 | - | | 0.0638 | 4400 | 0.1873 | - | | 0.0645 | 4450 | 0.1654 | - | | 0.0653 | 4500 | 0.1912 | - | | 0.0660 | 4550 | 0.1533 | - | | 0.0667 | 4600 | 0.1759 | - | | 0.0674 | 4650 | 0.154 | - | | 0.0682 | 4700 | 0.147 | - | | 0.0689 | 4750 | 0.161 | - | | 0.0696 | 4800 | 0.1603 | - | | 0.0703 | 4850 | 0.1529 | - | | 0.0711 | 4900 | 0.1538 | - | | 0.0718 | 4950 | 0.1487 | - | | 0.0725 | 5000 | 0.1593 | - | | 0.0732 | 5050 | 0.1491 | - | | 0.0740 | 5100 | 0.1389 | - | | 0.0747 | 5150 | 0.1132 | - | | 0.0754 | 5200 | 0.1622 | - | | 0.0761 | 5250 | 0.1628 | - | | 0.0769 | 5300 | 0.1598 | - | | 0.0776 | 5350 | 0.1362 | - | | 0.0783 | 5400 | 0.1637 | - | | 0.0790 | 5450 | 0.1352 | - | | 0.0798 | 5500 | 0.1523 | - | | 0.0805 | 5550 | 0.1604 | - | | 0.0812 | 5600 | 0.1534 | - | | 0.0819 | 5650 | 0.1206 | - | | 0.0827 | 5700 | 0.1331 | - | | 0.0834 | 5750 | 0.1449 | - | | 0.0841 | 5800 | 0.1376 | - | | 0.0848 | 5850 | 0.1293 | - | | 0.0856 | 5900 | 0.1258 | - | | 0.0863 | 5950 | 0.1391 | - | | 0.0870 | 6000 | 0.1678 | - | | 0.0877 | 6050 | 0.1439 | - | | 0.0885 | 6100 | 0.1329 | - | | 0.0892 | 6150 | 0.1416 | - | | 0.0899 | 6200 | 0.126 | - | | 0.0906 | 6250 | 0.1072 | - | | 0.0914 | 6300 | 0.1314 | - | | 0.0921 | 6350 | 0.1282 | - | | 0.0928 | 6400 | 0.1418 | - | | 0.0935 | 6450 | 0.1418 | - | | 0.0943 | 6500 | 0.1126 | - | | 0.0950 | 6550 | 0.1118 | - | | 0.0957 | 6600 | 0.1437 | - | | 0.0964 | 6650 | 0.1265 | - | | 0.0972 | 6700 | 0.1203 | - | | 0.0979 | 6750 | 0.1267 | - | | 0.0986 | 6800 | 0.11 | - | | 0.0993 | 6850 | 0.1273 | - | | 0.1001 | 6900 | 0.1253 | - | | 0.1008 | 6950 | 0.1145 | - | | 0.1015 | 7000 | 0.1054 | - | | 0.1022 | 7050 | 0.1311 | - | | 0.1030 | 7100 | 0.1238 | - | | 0.1037 | 7150 | 0.0951 | - | | 0.1044 | 7200 | 0.1187 | - | | 0.1051 | 7250 | 0.1114 | - | | 0.1059 | 7300 | 0.1038 | - | | 0.1066 | 7350 | 0.1048 | - | | 0.1073 | 
7400 | 0.0965 | - | | 0.1080 | 7450 | 0.1006 | - | | 0.1088 | 7500 | 0.1273 | - | | 0.1095 | 7550 | 0.12 | - | | 0.1102 | 7600 | 0.1055 | - | | 0.0001 | 1 | 0.1192 | - | | 0.0029 | 50 | 0.1128 | - | | 0.0057 | 100 | 0.0981 | - | | 0.0021 | 1 | 0.1188 | - | | 0.1040 | 50 | 0.1121 | - | | 0.0021 | 1 | 0.1172 | - | | 0.1040 | 50 | 0.1109 | - | | 0.2079 | 100 | 0.0965 | - | | 0.3119 | 150 | 0.1013 | - | | 0.4158 | 200 | 0.1157 | - | | 0.5198 | 250 | 0.1093 | - | | 0.6237 | 300 | 0.1029 | - | | 0.7277 | 350 | 0.0904 | - | | 0.8316 | 400 | 0.1084 | - | | 0.9356 | 450 | 0.1127 | - | | **1.0** | **481** | **-** | **0.1883** | | 1.0395 | 500 | 0.0853 | - | | 1.1435 | 550 | 0.0907 | - | | 1.2474 | 600 | 0.0814 | - | | 1.3514 | 650 | 0.0967 | - | | 1.4553 | 700 | 0.118 | - | | 1.5593 | 750 | 0.0841 | - | | 1.6632 | 800 | 0.0992 | - | | 1.7672 | 850 | 0.0965 | - | | 1.8711 | 900 | 0.092 | - | | 1.9751 | 950 | 0.109 | - | | 2.0 | 962 | - | 0.193 | | 2.0790 | 1000 | 0.0847 | - | | 2.1830 | 1050 | 0.0864 | - | | 2.2869 | 1100 | 0.0843 | - | | 2.3909 | 1150 | 0.0792 | - | | 2.4948 | 1200 | 0.0808 | - | | 2.5988 | 1250 | 0.0913 | - | | 2.7027 | 1300 | 0.0848 | - | | 2.8067 | 1350 | 0.0889 | - | | 2.9106 | 1400 | 0.0673 | - | | 3.0 | 1443 | - | 0.1983 | | 3.0146 | 1450 | 0.0671 | - | | 3.1185 | 1500 | 0.0643 | - | | 3.2225 | 1550 | 0.0649 | - | | 3.3264 | 1600 | 0.0827 | - | | 3.4304 | 1650 | 0.0752 | - | | 3.5343 | 1700 | 0.0785 | - | | 3.6383 | 1750 | 0.0629 | - | | 3.7422 | 1800 | 0.0726 | - | | 3.8462 | 1850 | 0.0672 | - | | 3.9501 | 1900 | 0.0704 | - | | 4.0 | 1924 | - | 0.2015 | | 4.0541 | 1950 | 0.0812 | - | | 4.1580 | 2000 | 0.0709 | - | | 4.2620 | 2050 | 0.0866 | - | | 4.3659 | 2100 | 0.0747 | - | | 4.4699 | 2150 | 0.0554 | - | | 4.5738 | 2200 | 0.0636 | - | | 4.6778 | 2250 | 0.0655 | - | | 4.7817 | 2300 | 0.0562 | - | | 4.8857 | 2350 | 0.0531 | - | | 4.9896 | 2400 | 0.0518 | - | | 5.0 | 2405 | - | 0.2056 | | 5.0936 | 2450 | 0.0808 | - | | 5.1975 | 2500 | 0.0571 | - | | 5.3015 | 2550 | 0.066 | - | | 5.4054 | 2600 | 0.071 | - | | 5.5094 | 2650 | 0.0507 | - | | 5.6133 | 2700 | 0.0603 | - | | 5.7173 | 2750 | 0.0548 | - | | 5.8212 | 2800 | 0.0714 | - | | 5.9252 | 2850 | 0.0532 | - | | 6.0 | 2886 | - | 0.208 | | 6.0291 | 2900 | 0.0581 | - | | 6.1331 | 2950 | 0.0663 | - | | 6.2370 | 3000 | 0.0717 | - | | 6.3410 | 3050 | 0.0549 | - | | 6.4449 | 3100 | 0.0611 | - | | 6.5489 | 3150 | 0.0515 | - | | 6.6528 | 3200 | 0.0546 | - | | 6.7568 | 3250 | 0.0406 | - | | 6.8607 | 3300 | 0.0582 | - | | 6.9647 | 3350 | 0.0565 | - | | 7.0 | 3367 | - | 0.2176 | | 7.0686 | 3400 | 0.0737 | - | | 7.1726 | 3450 | 0.0554 | - | | 7.2765 | 3500 | 0.0462 | - | | 7.3805 | 3550 | 0.051 | - | | 7.4844 | 3600 | 0.0441 | - | | 7.5884 | 3650 | 0.0503 | - | | 7.6923 | 3700 | 0.0531 | - | | 7.7963 | 3750 | 0.0464 | - | | 7.9002 | 3800 | 0.0443 | - | | 8.0 | 3848 | - | 0.2234 | | 8.0042 | 3850 | 0.0376 | - | | 8.1081 | 3900 | 0.0542 | - | | 8.2121 | 3950 | 0.0453 | - | | 8.3160 | 4000 | 0.0448 | - | | 8.4200 | 4050 | 0.0535 | - | | 8.5239 | 4100 | 0.0645 | - | | 8.6279 | 4150 | 0.0451 | - | | 8.7318 | 4200 | 0.0472 | - | | 8.8358 | 4250 | 0.0477 | - | | 8.9397 | 4300 | 0.0327 | - | | 9.0 | 4329 | - | 0.2272 | | 9.0437 | 4350 | 0.0346 | - | | 9.1476 | 4400 | 0.0435 | - | | 9.2516 | 4450 | 0.0479 | - | | 9.3555 | 4500 | 0.0508 | - | | 9.4595 | 4550 | 0.0535 | - | | 9.5634 | 4600 | 0.0631 | - | | 9.6674 | 4650 | 0.0286 | - | | 9.7713 | 4700 | 0.0564 | - | | 9.8753 | 4750 | 0.0349 | - | | 9.9792 | 4800 | 0.0487 | - | | 10.0 | 4810 | - | 0.2288 
| | 10.0832 | 4850 | 0.0317 | - | | 10.1871 | 4900 | 0.0546 | - | | 10.2911 | 4950 | 0.0353 | - | | 10.3950 | 5000 | 0.0437 | - | | 10.4990 | 5050 | 0.056 | - | | 10.6029 | 5100 | 0.0353 | - | | 10.7069 | 5150 | 0.0304 | - | | 10.8108 | 5200 | 0.0358 | - | | 10.9148 | 5250 | 0.0481 | - | | 11.0 | 5291 | - | 0.2282 | | 11.0187 | 5300 | 0.0318 | - | | 11.1227 | 5350 | 0.0373 | - | | 11.2266 | 5400 | 0.0305 | - | | 11.3306 | 5450 | 0.0443 | - | | 11.4345 | 5500 | 0.0383 | - | | 11.5385 | 5550 | 0.0425 | - | | 11.6424 | 5600 | 0.039 | - | | 11.7464 | 5650 | 0.0443 | - | | 11.8503 | 5700 | 0.0503 | - | | 11.9543 | 5750 | 0.0553 | - | | 12.0 | 5772 | - | 0.2342 | | 12.0582 | 5800 | 0.0362 | - | | 12.1622 | 5850 | 0.0509 | - | | 12.2661 | 5900 | 0.0337 | - | | 12.3701 | 5950 | 0.0436 | - | | 12.4740 | 6000 | 0.0462 | - | | 12.5780 | 6050 | 0.034 | - | | 12.6819 | 6100 | 0.0334 | - | | 12.7859 | 6150 | 0.0365 | - | | 12.8898 | 6200 | 0.047 | - | | 12.9938 | 6250 | 0.0489 | - | | 13.0 | 6253 | - | 0.2317 | | 13.0977 | 6300 | 0.035 | - | | 13.2017 | 6350 | 0.0412 | - | | 13.3056 | 6400 | 0.0358 | - | | 13.4096 | 6450 | 0.0366 | - | | 13.5135 | 6500 | 0.0473 | - | | 13.6175 | 6550 | 0.0481 | - | | 13.7214 | 6600 | 0.0443 | - | | 13.8254 | 6650 | 0.0454 | - | | 13.9293 | 6700 | 0.0344 | - | | 14.0 | 6734 | - | 0.2304 | | 14.0333 | 6750 | 0.0327 | - | | 14.1372 | 6800 | 0.0386 | - | | 14.2412 | 6850 | 0.0503 | - | | 14.3451 | 6900 | 0.0236 | - | | 14.4491 | 6950 | 0.042 | - | | 14.5530 | 7000 | 0.0405 | - | | 14.6570 | 7050 | 0.0339 | - | | 14.7609 | 7100 | 0.0435 | - | | 14.8649 | 7150 | 0.0314 | - | | 14.9688 | 7200 | 0.0263 | - | | 15.0 | 7215 | - | 0.234 | | 15.0728 | 7250 | 0.0369 | - | | 15.1767 | 7300 | 0.0329 | - | | 15.2807 | 7350 | 0.0366 | - | | 15.3846 | 7400 | 0.0401 | - | | 15.4886 | 7450 | 0.0321 | - | | 15.5925 | 7500 | 0.0571 | - | | 15.6965 | 7550 | 0.0353 | - | | 15.8004 | 7600 | 0.0381 | - | | 15.9044 | 7650 | 0.0347 | - | | 16.0 | 7696 | - | 0.2334 | | 16.0083 | 7700 | 0.0341 | - | | 16.1123 | 7750 | 0.0276 | - | | 16.2162 | 7800 | 0.0555 | - | | 16.3202 | 7850 | 0.0338 | - | | 16.4241 | 7900 | 0.0227 | - | | 16.5281 | 7950 | 0.0256 | - | | 16.6320 | 8000 | 0.0356 | - | | 16.7360 | 8050 | 0.0413 | - | | 16.8399 | 8100 | 0.032 | - | | 16.9439 | 8150 | 0.0329 | - | | 17.0 | 8177 | - | 0.2356 | | 17.0478 | 8200 | 0.0382 | - | | 17.1518 | 8250 | 0.0434 | - | | 17.2557 | 8300 | 0.0411 | - | | 17.3597 | 8350 | 0.0329 | - | | 17.4636 | 8400 | 0.0388 | - | | 17.5676 | 8450 | 0.0384 | - | | 17.6715 | 8500 | 0.0306 | - | | 17.7755 | 8550 | 0.0185 | - | | 17.8794 | 8600 | 0.0357 | - | | 17.9834 | 8650 | 0.0349 | - | | 18.0 | 8658 | - | 0.2368 | | 18.0873 | 8700 | 0.0515 | - | | 18.1913 | 8750 | 0.0326 | - | | 18.2952 | 8800 | 0.0367 | - | | 18.3992 | 8850 | 0.0241 | - | | 18.5031 | 8900 | 0.0313 | - | | 18.6071 | 8950 | 0.0275 | - | | 18.7110 | 9000 | 0.0378 | - | | 18.8150 | 9050 | 0.0401 | - | | 18.9189 | 9100 | 0.0285 | - | | 19.0 | 9139 | - | 0.2347 | | 19.0229 | 9150 | 0.0309 | - | | 19.1268 | 9200 | 0.035 | - | | 19.2308 | 9250 | 0.0415 | - | | 19.3347 | 9300 | 0.0301 | - | | 19.4387 | 9350 | 0.0293 | - | | 19.5426 | 9400 | 0.0323 | - | | 19.6466 | 9450 | 0.0342 | - | | 19.7505 | 9500 | 0.0205 | - | | 19.8545 | 9550 | 0.0331 | - | | 19.9584 | 9600 | 0.0226 | - | | 20.0 | 9620 | - | 0.237 | * The bold row denotes the saved checkpoint. 
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1

## Citation

### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
    doi = {10.48550/ARXIV.2209.11055},
    url = {https://arxiv.org/abs/2209.11055},
    author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
    keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
    title = {Efficient Few-Shot Learning Without Prompts},
    publisher = {arXiv},
    year = {2022},
    copyright = {Creative Commons Attribution 4.0 International}
}
```

<!-- ## Glossary

*Clearly define terms in order to be accessible across audiences.* -->

<!-- ## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* -->

<!-- ## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-MiniLM-L6-v2", "widget": [{"text": "What fabric has a comfortable feel and is suitable for people with sensitive skin?"}, {"text": "What is the most recommended fabric for making outerwear that requires a blend of comfort and resilience?"}, {"text": "What fabric has a fluid drape and is ideal for creating lightweight summer dresses?"}, {"text": "Which fabric is best for creating versatile clothing items like casual shirts, blouses, and dresses in a periwinkle blue hue?"}, {"text": "What kind of fabric is suitable for making form-fitting activewear like yoga pants and t-shirts?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.3462566844919786, "name": "Accuracy"}]}]}]}
Jazielinho/fabric_model
null
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-MiniLM-L6-v2", "model-index", "region:us" ]
null
2024-05-01T10:39:17+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us
SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2 ========================================================= This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-MiniLM-L6-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2 * Classification head: a LogisticRegression instance * Maximum Sequence Length: 128 tokens * Number of Classes: 75 classes ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts ### Model Labels Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (256, 256) * num\_epochs: (20, 20) * max\_steps: -1 * sampling\_strategy: undersampling * body\_learning\_rate: (2e-05, 1e-05) * head\_learning\_rate: 0.01 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: True ### Training Results * The bold row denotes the saved checkpoint. ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * Transformers: 4.40.1 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.19.1 ### BibTeX
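The text above notes that inference requires installing the SetFit library and loading the model, but the code itself did not survive into this record. The following is a minimal sketch of what that usage typically looks like for a SetFit checkpoint, using the repository id from this record; the exact snippet in the original card may differ.

```python
# Assumed setup: pip install setfit
from setfit import SetFitModel

# Load the checkpoint referenced by this record
model = SetFitModel.from_pretrained("Jazielinho/fabric_model")

# Run inference on one of the widget queries listed in the metadata above
preds = model.predict([
    "What fabric has a comfortable feel and is suitable for people with sensitive skin?"
])
print(preds)  # one predicted fabric label per input sentence
```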
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True", "### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True", "### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ 68, 60, 42, 16, 10, 43, 7, 169, 14, 75, 6 ]
[ "TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us \n### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts### Model Labels\n\n\n\nEvaluation\n----------### Metrics\n\n\n\nUses\n----### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------### Training Set Metrics### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1### BibTeX" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
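The card above is an unfilled template, so there is no official usage snippet. Based solely on this record's tags (transformers, mistral, text-generation), a generic loading sketch would look like the following; it assumes the repository hosts a standard causal-LM checkpoint, which the card itself does not confirm.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "rainerberger/planetn7"  # repository id from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Generate a short continuation from a toy prompt
inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```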
{"library_name": "transformers", "tags": []}
rainerberger/planetn7
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:45:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 44, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Merged-Vicuna-RP-Stew-34B

Quantized 4.25 exl2 of the model down below:

https://huggingface.co/MarinaraSpaghetti/RP-Stew-v2.5-34B

Specialized parquet used:

https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light?not-for-all-audiences=true

## Merge Details

It's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to have had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe.

### Settings

Temperature @ 0.8

Min-P @ 0.01

Typical-P @ 0.95

Repetition Penalty @ 1.07

Repetition Range @ 4096

Smoothing Factor @ 0.3

Everything else @ off

Early Stopping = X

Do Sample = ✓

Add BOS Token = X

Ban EOS Token = ✓

Skip Special Tokens = X

Temperature Last = ✓

Custom Stopping Strings: "<|im_end|>", "< / s >" (<---without spaces)

### Prompt Format: Chat-Vicuna

```
SYSTEM: {system_prompt}<|im_end|>
USER: {prompt}<|im_end|>
ASSISTANT: {output}<|im_end|>
```

### Models Merged

The following models were included in the merge:

https://huggingface.co/NousResearch/Nous-Capybara-34B

https://huggingface.co/migtissera/Tess-34B-v1.5b

https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2

https://huggingface.co/maywell/PiVoT-SUS-RP

https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama

https://huggingface.co/NeverSleep/CausalLM-RP-34B

https://huggingface.co/chargoddard/Yi-34B-200K-Llama

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Nontoxic-PiVoT-Bagel-RP-34b
    parameters:
      weight: 0.16
      density: 0.42
  - model: Nyakura-CausalLM-RP-34B
    parameters:
      weight: 0.22
      density: 0.54
  - model: Tess-34B-v1.5b
    parameters:
      weight: 0.28
      density: 0.66
  - model: Nous-Capybara-34B-V1.9
    parameters:
      weight: 0.34
      density: 0.78
merge_method: dare_ties
base_model: Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
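As a small illustration of the Chat-Vicuna format above, the helper below assembles a single-turn prompt string; the function name and wrapper are illustrative additions, not part of the original card.

```python
def build_chat_vicuna_prompt(system_prompt: str, user_prompt: str) -> str:
    """Assemble a single-turn prompt in the Chat-Vicuna format shown above."""
    return (
        f"SYSTEM: {system_prompt}<|im_end|>\n"
        f"USER: {user_prompt}<|im_end|>\n"
        f"ASSISTANT:"
    )

# The model's reply should then be cut at the "<|im_end|>" stopping string.
print(build_chat_vicuna_prompt("You are a creative storyteller.", "Describe a hearty stew."))
```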
{"license": "other", "tags": ["merge", "roleplay", "exl2", "not-for-all-audiences"], "license_name": "yi-34b", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE"}
ParasiticRogue/RP-Stew-v2.5-34B-exl2-4.25
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "roleplay", "exl2", "not-for-all-audiences", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:45:46+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Merged-Vicuna-RP-Stew-34B Quantized 4.25 exl2 of the model down below: URL Specialized parquet used: URL ## Merge Details It's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to have had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe. ### Settings Temperature @ 0.8 Min-P @ 0.01 Typical-P @ 0.95 Repetition Penalty @ 1.07 Repetition Range @ 4096 Smoothing Factor @ 0.3 Everything else @ off Early Stopping = X Do Sample = Add BOS Token = X Ban EOS Token = Skip Special Tokens = X Temperature Last = Custom Stopping Strings: "<|im_end|>", "< / s >" (<---without spaces) ### Prompt Format: Chat-Vicuna ### Models Merged The following models were included in the merge: URL URL URL URL URL URL URL ### Configuration The following YAML configuration was used to produce this model:
[ "# Merged-Vicuna-RP-Stew-34B\n\nQuantized 4.25 exl2 of the model down below:\n\nURL\n\nSpecialized parquet used:\n\nURL", "## Merge Details\n\nIt's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to of had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe.", "### Settings\n\nTemperature @ 0.8\n\nMin-P @ 0.01\n\nTypical-P @ 0.95\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 4096\n\nSmoothing Factor @ 0.3\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = X\n\nTemperature Last = \n\nCustom Stopping Strings: \"<|im_end|>\", \"< / s >\" (<---without spaces)", "### Prompt Format: Chat-Vicuna", "### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Merged-Vicuna-RP-Stew-34B\n\nQuantized 4.25 exl2 of the model down below:\n\nURL\n\nSpecialized parquet used:\n\nURL", "## Merge Details\n\nIt's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to of had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe.", "### Settings\n\nTemperature @ 0.8\n\nMin-P @ 0.01\n\nTypical-P @ 0.95\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 4096\n\nSmoothing Factor @ 0.3\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = X\n\nTemperature Last = \n\nCustom Stopping Strings: \"<|im_end|>\", \"< / s >\" (<---without spaces)", "### Prompt Format: Chat-Vicuna", "### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ 55, 36, 91, 100, 10, 28, 16 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Merged-Vicuna-RP-Stew-34B\n\nQuantized 4.25 exl2 of the model down below:\n\nURL\n\nSpecialized parquet used:\n\nURL## Merge Details\n\nIt's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to of had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe.### Settings\n\nTemperature @ 0.8\n\nMin-P @ 0.01\n\nTypical-P @ 0.95\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 4096\n\nSmoothing Factor @ 0.3\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = X\n\nTemperature Last = \n\nCustom Stopping Strings: \"<|im_end|>\", \"< / s >\" (<---without spaces)### Prompt Format: Chat-Vicuna### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
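The card gives no usage code, pointing instead to Unit 4 of the course; a rough evaluation sketch in the style of that unit is shown below. The `policy.act(state)` interface and the classic gym API are assumptions based on the course material, not on this card.

```python
import gym

env = gym.make("CartPole-v1")

def evaluate(policy, n_episodes: int = 10) -> float:
    """Average episode return over n_episodes.

    `policy` is assumed to expose .act(state) -> (action, log_prob),
    as in Unit 4 of the Deep RL Course (an assumption, since the
    card itself shows no code).
    """
    total = 0.0
    for _ in range(n_episodes):
        state = env.reset()
        done = False
        while not done:
            action, _ = policy.act(state)
            state, reward, done, _ = env.step(action)  # classic gym API assumed
            total += reward
    return total / n_episodes
```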
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-cartpole2", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
joosma/Reinforce-cartpole2
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-05-01T10:49:33+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1. To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ 32, 46 ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-tiny-hi-capstone This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2348 - Wer: 116.5644 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 14 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 56 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 25 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 2.5312 | 0.02 | 25 | 1.3975 | 141.1837 | | 1.3224 | 0.05 | 50 | 1.2348 | 116.5644 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.16.0 - Tokenizers 0.15.0
{"language": ["en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "za"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "metrics": ["wer"], "base_model": "openai/whisper-tiny", "pipeline_tag": "automatic-speech-recognition", "model-index": [{"name": "whisper-tiny-hi-capstone", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_16_1"}, "metrics": [{"type": "wer", "value": 116.5644, "name": "Wer"}]}]}]}
mageec/whisper-tiny-hi-capstone
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "za", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us", "has_space" ]
null
2024-05-01T10:50:36+00:00
[]
[ "en", "zh", "de", "es", "ru", "ko", "fr", "ja", "pt", "tr", "pl", "ca", "nl", "ar", "sv", "it", "id", "hi", "fi", "vi", "he", "uk", "za" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #zh #de #es #ru #ko #fr #ja #pt #tr #pl #ca #nl #ar #sv #it #id #hi #fi #vi #he #uk #za #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #model-index #endpoints_compatible #region-us #has_space
whisper-tiny-hi-capstone ======================== This model is a fine-tuned version of openai/whisper-tiny on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.2348 * Wer: 116.5644 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 14 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 56 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 25 * training\_steps: 50 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.1.2+cu121 * Datasets 2.16.0 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 56\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #zh #de #es #ru #ko #fr #ja #pt #tr #pl #ca #nl #ar #sv #it #id #hi #fi #vi #he #uk #za #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #model-index #endpoints_compatible #region-us #has_space \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 56\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.0" ]
[ 124, 149, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #en #zh #de #es #ru #ko #fr #ja #pt #tr #pl #ca #nl #ar #sv #it #id #hi #fi #vi #he #uk #za #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #model-index #endpoints_compatible #region-us #has_space \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 14\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 56\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 50\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.0\n* Tokenizers 0.15.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
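Since the card above is an unfilled template, the sketch below is inferred from the record's tags (transformers, stablelm, text-generation, conversational) alone; in particular, the presence of a chat template in the tokenizer is an assumption.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tomaszki/stablelm-50"  # repository id from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# The "conversational" tag suggests a chat template may be defined;
# this is an assumption, as the card itself provides no usage code.
messages = [{"role": "user", "content": "Hello! What can you do?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```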
{"library_name": "transformers", "tags": []}
tomaszki/stablelm-50
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:51:09+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 41, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # isy503-a03 This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the [IMDB Dataset of 50K Movie Reviews](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews) dataset. It achieves the following results on the evaluation set: - Loss: 0.2328 - Accuracy: 0.9318 ## Model description A sentiment analysis model used on a academic excercise to learn and practice **Sentiment Analysis** using DistilBERT. ## Intended uses & limitations It is only an academic excercise, which aims to be the foundation for other excercises such as improving the mdoel using multilanguage processing and multi-feature output (Likert Scale to improve output accuracy, rather than only POSITIVE and NEGATIVE) ## Training and evaluation data The training has been done using the following tutorial: [Hugging Face: Text classification](https://huggingface.co/docs/transformers/en/tasks/sequence_classification). And the evaluation has been done with a random sample of Movie and Amazon Product reviews. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2251 | 1.0 | 1563 | 0.2154 | 0.9189 | | 0.1463 | 2.0 | 3126 | 0.2328 | 0.9318 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["Q-b1t/IMDB-Dataset-of-50K-Movie-Reviews-Backup"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "isy503-a03", "results": []}]}
nicoketterer/isy503-a03
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:Q-b1t/IMDB-Dataset-of-50K-Movie-Reviews-Backup", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T10:53:59+00:00
[]
[ "en" ]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #en #dataset-Q-b1t/IMDB-Dataset-of-50K-Movie-Reviews-Backup #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
isy503-a03 ========== This model is a fine-tuned version of distilbert/distilbert-base-uncased on the IMDB Dataset of 50K Movie Reviews dataset. It achieves the following results on the evaluation set: * Loss: 0.2328 * Accuracy: 0.9318 Model description ----------------- A sentiment analysis model used in an academic exercise to learn and practice Sentiment Analysis using DistilBERT. Intended uses & limitations --------------------------- It is only an academic exercise, which aims to be the foundation for other exercises such as improving the model using multilanguage processing and multi-feature output (a Likert scale to improve output accuracy, rather than only POSITIVE and NEGATIVE). Training and evaluation data ---------------------------- The training has been done using the following tutorial: Hugging Face: Text classification. The evaluation has been done with a random sample of movie and Amazon product reviews. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #en #dataset-Q-b1t/IMDB-Dataset-of-50K-Movie-Reviews-Backup #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 90, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #en #dataset-Q-b1t/IMDB-Dataset-of-50K-Movie-Reviews-Backup #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
wannaphong/numfalm-chat-full
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T10:57:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
Mailvita EML to Gmail Importer for Mac Software is a reliable tool for importing EML files into a Gmail account. This application exports data from EML files to a Gmail account without causing any data loss. It is compatible with all email applications that use EML files, including Windows Live Mail, Outlook Express, Thunderbird, Windows Mail, Entourage, and Mac Mail. Users can effortlessly export all emails from EML files to their Gmail account, including attachments. It works with all macOS versions, including 13 "Ventura," 12 "Monterey," 11 "Big Sur," 10.15 "Catalina," 10.14 "Mojave," 10.13 "High Sierra," and 10.12 "Sierra." A Windows edition of the utility is also available and supports all Windows and Microsoft Outlook versions. Download the application to try the free demo version. Visit here: https://www.mailvita.com/eml-to-gmail-importer-for-mac/
{}
mailvita/mailvita-eml-to-gmail-importer-for-mac
null
[ "region:us" ]
null
2024-05-01T11:00:06+00:00
[]
[]
TAGS #region-us
Mailvita EML to Gmail Importer for Mac Software is a reliable tool for importing EML files into a Gmail account. This application exports data from EML files to a Gmail account without causing any data loss. It is compatible with all email applications that use EML files, including Windows Live Mail, Outlook Express, Thunderbird, Windows Mail, Entourage, and Mac Mail. Users can effortlessly export all emails from EML files to their Gmail account, including attachments. It works with all macOS versions, including 13 "Ventura," 12 "Monterey," 11 "Big Sur," 10.15 "Catalina," 10.14 "Mojave," 10.13 "High Sierra," and 10.12 "Sierra." A Windows edition of the utility is also available and supports all Windows and Microsoft Outlook versions. Download the application to try the free demo version. Visit here: URL
[]
[ "TAGS\n#region-us \n" ]
[ 5 ]
[ "TAGS\n#region-us \n" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.1.dev0
{"library_name": "peft", "base_model": "Trelis/Llama-2-7b-chat-hf-sharded-bf16"}
bobbins228/Llama-2-7b-chat-hf-sharded-bf16-fine-tuned-adapters
null
[ "peft", "arxiv:1910.09700", "base_model:Trelis/Llama-2-7b-chat-hf-sharded-bf16", "region:us" ]
null
2024-05-01T11:00:28+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-Trelis/Llama-2-7b-chat-hf-sharded-bf16 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.1.dev0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-Trelis/Llama-2-7b-chat-hf-sharded-bf16 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.1.dev0" ]
[ 45, 6, 4, 50, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5, 16 ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-Trelis/Llama-2-7b-chat-hf-sharded-bf16 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact### Framework versions\n\n- PEFT 0.10.1.dev0" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Bert-fine-tuned-WiC This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6958 - Accuracy: 0.6881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6164 | 1.0 | 679 | 0.6806 | 0.6865 | | 0.4214 | 2.0 | 1358 | 1.0573 | 0.6646 | | 0.2186 | 3.0 | 2037 | 1.3339 | 0.6897 | | 0.1485 | 4.0 | 2716 | 1.5803 | 0.6881 | | 0.115 | 5.0 | 3395 | 1.6958 | 0.6881 | ### Framework versions - Transformers 4.39.3 - Pytorch 1.13.0 - Datasets 2.18.0 - Tokenizers 0.15.2
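A hedged sketch of how the training setup reported above could be reproduced with the 🤗 `Trainer` API. The hyperparameters mirror the card; `num_labels=2` and the dataset wiring are assumptions, since the card does not identify the training data (the model name suggests the binary WiC word-in-context task).

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 is an assumption; the card does not state the label set.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Mirrors the hyperparameters reported in the card above.
args = TrainingArguments(
    output_dir="bert-fine-tuned-wic",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",
    fp16=True,  # "mixed_precision_training: Native AMP"
)

# The datasets are placeholders; supply tokenized train/eval splits here.
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```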
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "Bert-fine-tuned-WiC", "results": []}]}
rycecorn/Bert-fine-tuned-WiC
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:01:07+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Bert-fine-tuned-WiC =================== This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6958 * Accuracy: 0.6881 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 1.13.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.13.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.13.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 55, 112, 5, 40 ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 1.13.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
Voice models for the Mimic 3 text-to-speech system. Original source: https://github.com/MycroftAI/mimic3-voices
{"language": ["af", "bn", "de", "el", "en", "en", "es", "fa", "fi", "fr", "gu", "ha", "hu", "it", "jv", "ko", "ne", "nl", "pl", "ru", "sw", "te", "tn", "uk", "vi", "yo"], "license": "cc-by-sa-4.0"}
mukowaty/mimic3-voices
null
[ "onnx", "af", "bn", "de", "el", "en", "es", "fa", "fi", "fr", "gu", "ha", "hu", "it", "jv", "ko", "ne", "nl", "pl", "ru", "sw", "te", "tn", "uk", "vi", "yo", "license:cc-by-sa-4.0", "region:us" ]
null
2024-05-01T11:01:09+00:00
[]
[ "af", "bn", "de", "el", "en", "en", "es", "fa", "fi", "fr", "gu", "ha", "hu", "it", "jv", "ko", "ne", "nl", "pl", "ru", "sw", "te", "tn", "uk", "vi", "yo" ]
TAGS #onnx #af #bn #de #el #en #es #fa #fi #fr #gu #ha #hu #it #jv #ko #ne #nl #pl #ru #sw #te #tn #uk #vi #yo #license-cc-by-sa-4.0 #region-us
Voice models for the Mimic 3 text-to-speech system. Original source: URL
[]
[ "TAGS\n#onnx #af #bn #de #el #en #es #fa #fi #fr #gu #ha #hu #it #jv #ko #ne #nl #pl #ru #sw #te #tn #uk #vi #yo #license-cc-by-sa-4.0 #region-us \n" ]
[ 71 ]
[ "TAGS\n#onnx #af #bn #de #el #en #es #fa #fi #fr #gu #ha #hu #it #jv #ko #ne #nl #pl #ru #sw #te #tn #uk #vi #yo #license-cc-by-sa-4.0 #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model29
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:01:29+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 41, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Nadhir3/Mistral-7B-Instruct-v0.2-fine-tuned
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:01:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Uploaded model - **Developed by:** davanstrien - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
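As a quick start, the model can be loaded with plain `transformers` as sketched below. This is an illustrative sketch, not part of the original card: the 4-bit quantization config mirrors the `unsloth/llama-3-8b-bnb-4bit` base, and the prompt and generation settings are assumptions.

```python
# Illustrative inference sketch (assumes a CUDA GPU and the bitsandbytes package).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "davanstrien/LLama-3-dataset-tldr"  # repository ID from this card

# 4-bit loading mirrors the unsloth/llama-3-8b-bnb-4bit base model.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Summarise this dataset card in one sentence:"  # hypothetical example input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```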
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "orpo"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
davanstrien/LLama-3-dataset-tldr
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "orpo", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:04:35+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: davanstrien - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 76, 81 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #orpo #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: davanstrien\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Paula-Sanchez/fine_tuned_t5_small_model_wiki_auto
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:06:31+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 49, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
## Baichuan2-7B-Chat-DcardStylePost-SFT

## 💻 Usage

```python
# Install dependencies (Jupyter/Colab cell syntax).
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

# Repository ID of the fine-tuned model on the Hugging Face Hub.
model = "JiunYi/Baichuan2-7B-Chat-DcardStylePost-SFT"
messages = [{"role": "user", "content": "【任务】:\n你是口碑行销公司的资深写手,现在要撰写一篇Dcard 『闲聊版』的口碑文。\n麻烦熟读下列 【参考资料】将商品或品牌与生活情境的融合,并尽量贴近生活口吻,以不要被发现是业配文为重要目标。\n请依照【口碑切角】为文章的故事主轴撰写整篇文章,请参考并依照【规范】角色口吻来撰写文章,融入角色情境来完成整篇故事撰写,请注重人物角色特色及限制。\n\n-\n\n【规范】:\n\n1.需产出文章标题\n2.请以第一人称方式撰写文章\n3.请记住现在是时间是西元 2023 年\n4.Please write in zh-TW language .\n5.降低口碑文业配感\n6.写作者是一个19~25岁之间的女生的大学生或是社会新鲜人\n7.喜爱吃甜食,常常带著男友一起去踩点试吃看看\n8.除了介绍食物,也喜欢顺便男友发生好笑的趣事来跟食物介绍的故事结合\n9.为降低口碑文业配感,请多多举几个甜点推荐\n10.喜欢应用很多可爱的颜文字和emoji\n\n-\n\n【参考资料】\n▲可丽露\n>>龙眼蜜,所以吃起来不会这么甜,跟其他家的可丽露吃起来真的很有差异\n以野生龙眼蜜减低并取代部分甜度,带出微微酸感的蛋蜜香,外脆内湿润的口感,完整的蜂巢组织度,木质调的兰姆酒香,法国盐之花平衡了整体,经典细致的马达加斯加香草籽原味,请在出炉后的3小时内食用完毕或\"冷冻\"保存,回烤后食用最接近现烤口感!\n\n\n\n▲奶盖布丁\n>>法国盐之花,连盐巴都很用心的甜点师\n带咸度的法国盐之花奶盖,微甜浓郁而不腻口的布蕾布丁体,和著偏苦的手煮焦糖液,是一款有著丰富层次的大人味布丁! 图片为示意仅供参考,食用时请由上方挖到底,品尝完整风味~\n\n【口碑切角】\n男友就像金鱼一样,好像记忆都只有三秒,\n只有三秒就算了还说错很多很好笑的话XD\n我都会带甜点回去给男友吃~结果男友居然说玛莉露很好吃XD\n玛莉露是神奇宝贝,可丽露才是甜点啦!\n分享日常男友都会口误的甜点们"}]

# Baichuan2 ships custom modeling code, so trust_remote_code=True is required.
tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
# Render the chat messages into a single prompt string via the chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Sample a Dcard-style post (up to 512 new tokens).
outputs = pipeline(prompt, max_new_tokens=512, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"language": ["zh"], "license": "gpl-3.0", "tags": ["art", "marketing", "llama-factory"], "metrics": ["bleu"], "base_model": "baichuan-inc/Baichuan2-7B-Chat", "pipeline_tag": "text-generation"}
JiunYi/Baichuan2-7B-Chat-DcardStylePost-SFT
null
[ "transformers", "safetensors", "baichuan", "feature-extraction", "art", "marketing", "llama-factory", "text-generation", "conversational", "custom_code", "zh", "base_model:baichuan-inc/Baichuan2-7B-Chat", "license:gpl-3.0", "region:us" ]
null
2024-05-01T11:06:55+00:00
[]
[ "zh" ]
TAGS #transformers #safetensors #baichuan #feature-extraction #art #marketing #llama-factory #text-generation #conversational #custom_code #zh #base_model-baichuan-inc/Baichuan2-7B-Chat #license-gpl-3.0 #region-us
## Baichuan2-7B-Chat-DcardStylePost-SFT ## Usage
[ "## Baichuan2-7B-Chat-DcardStylePost-SFT", "## Usage" ]
[ "TAGS\n#transformers #safetensors #baichuan #feature-extraction #art #marketing #llama-factory #text-generation #conversational #custom_code #zh #base_model-baichuan-inc/Baichuan2-7B-Chat #license-gpl-3.0 #region-us \n", "## Baichuan2-7B-Chat-DcardStylePost-SFT", "## Usage" ]
[ 71, 19, 3 ]
[ "TAGS\n#transformers #safetensors #baichuan #feature-extraction #art #marketing #llama-factory #text-generation #conversational #custom_code #zh #base_model-baichuan-inc/Baichuan2-7B-Chat #license-gpl-3.0 #region-us \n## Baichuan2-7B-Chat-DcardStylePost-SFT## Usage" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.00001_withdpo_4iters_bs256_531lr_iter_2 This model is a fine-tuned version of [ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1](https://huggingface.co/ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1", "model-index": [{"name": "0.00001_withdpo_4iters_bs256_531lr_iter_2", "results": []}]}
ShenaoZ/0.00001_withdpo_4iters_bs256_531lr_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:07:05+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.00001_withdpo_4iters_bs256_531lr_iter_2 This model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.00001_withdpo_4iters_bs256_531lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.00001_withdpo_4iters_bs256_531lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ 99, 72, 7, 9, 9, 4, 155, 5, 44 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# 0.00001_withdpo_4iters_bs256_531lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
image-to-text
transformers
<u><b>We are creating a spatial aware vision-language(VL) model.</b></u> This is a trained model on COCO dataset images including extra information regarding the spatial relationship between the entities of the image. This is a sequence to sequence model for image-captioning. The architecture is <u><b>ViT encoder and GPT2 decoder.</b></u> <details> <summary>Requirements!</summary> - 4GB GPU RAM. - CUDA enabled docker </details> The way to download and run this: ```python device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") from transformers import pipeline image_captioner = pipeline("image-to-text", model="VCL3D/rgb-language_cap", max_new_tokens=200, device=device) filename = 'path/to/file' generated_captions = image_captioner(filename) print(generated_captions) ``` The model is trained to produce as many words as possible with a maximum of 200 tokens, which translates to roughly 5 sentences, while the 6th sentence is usually cropped. <i>The output is always of that form: "Object1" is to the "Left/Right etc." of the "Object2".</i> ## IF YOU WANT TO PRODUCE A SPECIFIC NUMBER OF CAPTIONS UP TO 5. ```python import os def print_up_to_n_sentences(captions, n): for caption in captions: generated_text = caption.get('generated_text', '') sentences = generated_text.split('.') result = '.'.join(sentences[:n]) #print(result) return result filename = 'path/to/file' generated_captions = image_captioner(filename) captions = print_up_to_n_sentences(generated_captions, 5) print(captions) ```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference"], "metrics": ["code_eval"], "pipeline_tag": "image-to-text"}
voxreality/rgb_language_cap
null
[ "transformers", "pytorch", "safetensors", "vision-encoder-decoder", "text-generation-inference", "image-to-text", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:07:44+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #vision-encoder-decoder #text-generation-inference #image-to-text #en #license-apache-2.0 #endpoints_compatible #region-us
<u><b>We are creating a spatial aware vision-language(VL) model.</b></u> This is a trained model on COCO dataset images including extra information regarding the spatial relationship between the entities of the image. This is a sequence to sequence model for image-captioning. The architecture is <u><b>ViT encoder and GPT2 decoder.</b></u> <details> <summary>Requirements!</summary> - 4GB GPU RAM. - CUDA enabled docker </details> The way to download and run this: The model is trained to produce as many words as possible with a maximum of 200 tokens, which translates to roughly 5 sentences, while the 6th sentence is usually cropped. <i>The output is always of that form: "Object1" is to the "Left/Right etc." of the "Object2".</i> ## IF YOU WANT TO PRODUCE A SPECIFIC NUMBER OF CAPTIONS UP TO 5.
[ "## IF YOU WANT TO PRODUCE A SPECIFIC NUMBER OF CAPTIONS UP TO 5." ]
[ "TAGS\n#transformers #pytorch #safetensors #vision-encoder-decoder #text-generation-inference #image-to-text #en #license-apache-2.0 #endpoints_compatible #region-us \n", "## IF YOU WANT TO PRODUCE A SPECIFIC NUMBER OF CAPTIONS UP TO 5." ]
[ 52, 17 ]
[ "TAGS\n#transformers #pytorch #safetensors #vision-encoder-decoder #text-generation-inference #image-to-text #en #license-apache-2.0 #endpoints_compatible #region-us \n## IF YOU WANT TO PRODUCE A SPECIFIC NUMBER OF CAPTIONS UP TO 5." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/farzan-ai/aya-LoRA-.5/runs/rt1ct4fb) [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/farzan-ai/aya-LoRA-.5/runs/fxwk6njh) # aya-LoRA-.5 This model is a fine-tuned version of [CohereForAI/aya-101](https://huggingface.co/CohereForAI/aya-101) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "CohereForAI/aya-101", "model-index": [{"name": "aya-LoRA-.5", "results": []}]}
Nima-nlc/aya-LoRA-.5
null
[ "peft", "safetensors", "t5", "generated_from_trainer", "base_model:CohereForAI/aya-101", "license:apache-2.0", "region:us" ]
null
2024-05-01T11:08:48+00:00
[]
[]
TAGS #peft #safetensors #t5 #generated_from_trainer #base_model-CohereForAI/aya-101 #license-apache-2.0 #region-us
<img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/> <img src="URL alt="Visualize in Weights & Biases" width="200" height="32"/> # aya-LoRA-.5 This model is a fine-tuned version of CohereForAI/aya-101 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 25 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# aya-LoRA-.5\n\nThis model is a fine-tuned version of CohereForAI/aya-101 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 25", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #t5 #generated_from_trainer #base_model-CohereForAI/aya-101 #license-apache-2.0 #region-us \n", "# aya-LoRA-.5\n\nThis model is a fine-tuned version of CohereForAI/aya-101 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 25", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ 44, 34, 7, 9, 9, 4, 93, 5, 54 ]
[ "TAGS\n#peft #safetensors #t5 #generated_from_trainer #base_model-CohereForAI/aya-101 #license-apache-2.0 #region-us \n# aya-LoRA-.5\n\nThis model is a fine-tuned version of CohereForAI/aya-101 on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 25### Training results### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.41.0.dev0\n- Pytorch 2.2.1\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
null
This is a demo NLP transformer model.
{"license": "mit"}
zunnu/NER_transformer
null
[ "license:mit", "region:us" ]
null
2024-05-01T11:09:13+00:00
[]
[]
TAGS #license-mit #region-us
This is a demo NLP transformer model.
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 9 ]
[ "TAGS\n#license-mit #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/llamafinal3
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:10:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 47, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
null
Llama 3 trained with a custom dataset
{}
iimran/llama3-GGUF
null
[ "gguf", "region:us" ]
null
2024-05-01T11:15:13+00:00
[]
[]
TAGS #gguf #region-us
Llama 3 trained with a custom dataset
[]
[ "TAGS\n#gguf #region-us \n" ]
[ 9 ]
[ "TAGS\n#gguf #region-us \n" ]
text-generation
transformers
## Model Details
{"language": ["uk"], "license": "apache-2.0", "pipeline_tag": "text-generation"}
marveled/busya
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "uk", "doi:10.57967/hf/2163", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:16:44+00:00
[]
[ "uk" ]
TAGS #transformers #tensorboard #safetensors #llama #text-generation #uk #doi-10.57967/hf/2163 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## Model Details
[ "## Model Details" ]
[ "TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #uk #doi-10.57967/hf/2163 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Model Details" ]
[ 62, 4 ]
[ "TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #uk #doi-10.57967/hf/2163 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n## Model Details" ]
null
transformers
# Uploaded model - **Developed by:** Crysiss - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Crysiss/llama3-8B-welfare-unsloth-last
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:18:27+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Crysiss - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 64, 80 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
null
'Apple', 'Banana', 'Maize', 'Orange', 'Tomatoes', 'Watermelon', 'Groundnuts', 'Mango', 'Grapes', 'Cotton', 'Coffee', 'Rice'
{}
duyv/Yolov7-HeThongNhanDienVaDeXuatCayTrong
null
[ "onnx", "region:us" ]
null
2024-05-01T11:20:28+00:00
[]
[]
TAGS #onnx #region-us
'Apple', 'Banana', 'Maize', 'Orange', 'Tomatoes', 'Watermelon', 'Groundnuts', 'Mango', 'Grapes', 'Cotton', 'Coffee', 'Rice'
[]
[ "TAGS\n#onnx #region-us \n" ]
[ 8 ]
[ "TAGS\n#onnx #region-us \n" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["llama-factory"]}
Moriacrafter/Qwen-1.8B_DepressionDetection
null
[ "transformers", "safetensors", "qwen", "feature-extraction", "llama-factory", "custom_code", "arxiv:1910.09700", "region:us" ]
null
2024-05-01T11:20:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #qwen #feature-extraction #llama-factory #custom_code #arxiv-1910.09700 #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #qwen #feature-extraction #llama-factory #custom_code #arxiv-1910.09700 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 37, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #qwen #feature-extraction #llama-factory #custom_code #arxiv-1910.09700 #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
### LeroyDyer/Mixtral_AI_CyberUltron_DPO

## TRAINED TO THINK!

Using a simple prompt template, it has been possible to RE-TRAIN some datasets to display the model's thoughts, which can range from calculations, to pathways not chosen, to classification tasks, or even language programology (i.e. X is a Y, etc.). It is important to train the LLM to have thinking processes for different situations, such as role play! So whilst generating responses based on a character, the profile is held in thoughts, so that later generations will stay on the chosen role; any updates or requested updates to the profile can be added to a thought! For any operations requiring the management of sub-agents, the thoughts can be used to hold the process and operations like a scratchpad! Then, when responding, the model replies with this scratchpad, or simply replies based on the request. Hence, training again on already successful integrations enables those to become embedded, giving the LLM an understanding of the solutions to these questions without replacing the expected answers.

When talking normally, DO EXPECT the odd thought to pop up!

DPO training has also been used to refine the model: accepting and rejecting some types of responses which are unwanted. Myself, I don't mind ALL responses, as it leads to character! But it is useful to give this methodology to the LLM, enabling it later to reject responses and to be asked to reformulate an answer. Hence, in training, it was first trained with the rejected answers!!!! Then retrained with the corrections! <<<<<<< LOL >>>>> Hence it understands both sides of the argument: the second instance was given the prompt "this response received a downvote or was rejected by the system for unknown reasons; please reformulate this response". This gives these generalisations to the model as possible requests, verbal or written, in future chats.

## CHAT TEMPLATE ::::

Hmm, tough one! In training we use many types of prompts and templates; hence templates are not fixed in the model, and any existing ones should be removed and replaced with the template you personally use, as the model is a collection of WEIGHTS!::: This is important to understand! How you query the model is your choice, with each type of prompt bringing different aspects out of the model! Commonly I have used the Mistral instruct prompt, but I have also used the ChatML prompt! SO it is important that you choose your special tokens (these are the tokens that will be masked in the output!)::: I will probably remove any existing templates from the tokenizer!!!

## MORE Fine Tuning ???? WHY!!!!

As we know, fine-tuning only updates the final layer, and extraction and de-ranking with LoRA also extract this last / penultimate layer. Hence, when fine-tuning models, you CANNOT fine-tune on TOP of the fine-tuning; hence merging! Collecting fine-tuned models and merging them retains the skills learned by both models, whereas fine-tuning on top of fine-tuning replaces the final layer... even applying LoRAs on top of LoRAs resets you! Hence: fine-tune! MERGE!..... Rinse and repeat! Upgrading! Or you can reload the same LoRA for further fine-tuning, as some LoRAs even become very large due to the number of epochs. Essentially a single, highly tuned expert layer!! So the next project is the Mixture of Adapters!.... MoMerge! PhatGoose etc.: creating an experts model from LoRAs! (Hopefully 32 models to create a frankenmerge to be merged directly into the main model and re-aligned!)

## MODELS !! :: : - Why?

New base model generation from the final Cybertron-series model and the final CyberSeries models :| It would seem that some models are not registering on the board?? Perhaps there is a limit per person! Followers should know that CyberBoss was my highest model (renamed), and my Cybertron models were heavily merged and trained on many datasets, even containing thinking paradigms. Merging the collection back into the base model gives the model a great position to begin from! Hence a new base model marker (Untrained/Sharded) (totally unlocked).

I had noticed the reality of TopK=1000, TopP=0.78, Temp=0.86: important with merged models, allowing the model to produce slightly more random results while also giving it a larger pool to select from. Obviously, for role play the model requires Temp to be 1+ :::

## FineTuning ::

Fine-tuning models down to a loss close to 0.9 means that some information is totally fixed and may not return without focusing the model! Sometimes you train the model only to a loss of 1.5+, allowing loosely trained data to surface when higher temperatures are applied! Hence role-play datasets are trained at higher loss rates than coding and maths datasets (which are trained close to overfitting). Hence merging plays an important role in centering the model again!

## Merging is not just for fun and games!

It is a vital part of the training process, locking data into the model as well as sharing data! Remember, data is not stored in the model: only the probability of the information being returned!

## From here to where?

Currently there is a trend for evaluation! Evaluating the model to discover its weaknesses and threats, then removing the specific layers identified in the model with the offensive content, enables these layers to be retrained and replaced! Replace with?? Replacing layers in the model also requires a realignment of information throughout the network! Despite being a copied layer (still preserving some content), once offensive content is discovered, the network can be trained with its counter-argument; hence the evaluation process enables the creation of a custom dataset targeting these internalized data! Despite a neural network NOT being a storage system (as the retrieval process is based on probabilities), at points in the network certain embedding values are present and, once translated or decoded into standard tokens, can actually be identified!

## WOW!!

So! This also means that at each layer the network is actually storing a probability table, a word-to-word matrix of probabilities for the next-token generation! It may even be possible to train a network for image recognition, as long as the images are tokenized into an embedding value associated with the image; hence image tokenizers. The embedding value produced should enable the output to contain the same images that were present in the training set, i.e. they have been tokenized and embedded into the model, so it should be able to produce an embedding associated with this output! Hence it should also be possible to retrieve the image from the image tokenizer? So tokens not decoded by the text tokenizer should be handed off to the image tokenizer, to decode the embedding and return its original (cascade) / digital numerical value (each pixel is a number, and with line encoding of images essentially each line can be reconstructed to produce an image; hence ALL images would need to be BitMap/JPEG/PNG according to the encoder!)

MISSION!

But still, we will need to install all the competition datasets into the model, so that the original baselines can be established, enabling, after layer removal, full realignment to the same dataset collection! Hence retaining all functionality. It is worth noting that domain-specific datasets should also be handled in the same way!

MORE TO COME! (Look out for the SFTs and merges.)

### Models Merged

All my merges are produced using a genetic algorithm: first, X and Y models are created; these models are merged with my own model and other nice models of the same calibre which are specialized for a task, i.e. coding, medical, roleplay etc. Consider a coding model a Y and a medical model an X, and consider my base model as the target. When creating Y or X, many merge types are used, from DARE to SLERP, but in the final merge only a linear merge is used! Hence the X and Y models may even be merged with targets that are not the same model type! Each model IS sharded to 1-2GB shards, also making it easier to merge, and the final merge is merged at 4GB per shard for easy downloading! It is important that the final merge is linear!!! If it cannot be merged linearly, then there is a deeper problem with the model. The final output is a model with unknown qualities and can often be a very high performer, but it may contain some unwanted behaviour, i.e. "I AM AN AI, I CANNOT DO THAT, IT'S UNETHICAL!", as some people have used TOXIC datasets containing such UNWANTEDNESS! STOP BEING A NANNY TO THE WORLD, THEN USING THE SAME TACTIC OR KNOWLEDGE ON THE PEOPLE! Stop saying FREE SPEECH, then arresting people for SPEAKING OUT! <<<<<< ALL GOVERNMENT INJECTIONS! We need to uncensor our models, as the people who release the larger models apply these constraints??? Hence going the Chinese route, as they do not have the same restrictions! (As you know, true communism is freedom, as each person should have the ability to have the same as another, and it should not be restricted to a select few, disguised as expensive, restricted or harmful!)

The following models were included in the merge (a sketch of the final linear step is given after this list):

* Y_Chroma <<<<<<<<<<<< 6 models merged (chat/commercial-based models, i.e. Zephyr, OpenChat, Anthropic etc.)
* [LeroyDyer/Mixtral_AI_CyberTron_Ultra](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) <<<
* Model being upgraded (remixed with CyberBoss/SmartBrain/CyberCoder), hence Meta & Google releasing untrained models!
* X_Chroma <<<<<<<<<<<< 6 models merged (maths-focused, from WizardMath to MetaMath)
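As referenced above, a minimal sketch of the final linear merge step in plain PyTorch/transformers rather than any specific merge tool; the X_Chroma repo ID and the 50/50 weights are illustrative assumptions, since only the CyberTron_Ultra link is given in the card.

```python
# A minimal sketch of a linear (weighted-average) merge of two checkpoints of
# the same architecture; the second repo ID and the weights are placeholders.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "LeroyDyer/Mixtral_AI_CyberTron_Ultra", torch_dtype=torch.float16
)
other = AutoModelForCausalLM.from_pretrained(
    "X_Chroma-placeholder", torch_dtype=torch.float16  # hypothetical ID
)

merged = base.state_dict()
for name, param in other.state_dict().items():
    merged[name] = 0.5 * merged[name] + 0.5 * param  # linear merge

base.load_state_dict(merged)
# Shard at 4GB per file, matching the sharding described above.
base.save_pretrained("Mixtral_AI_linear_merge", max_shard_size="4GB")
```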
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "milestone", "mega-series", "SpydazWebAI", "thinking-AI"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA", "abacusai/ARC_DPO_FewShot", "abacusai/MetaMath_DPO_FewShot", "abacusai/HellaSwag_DPO_FewShot", "HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberUltron"}
LeroyDyer/Mixtral_AI_Samantha
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "milestone", "mega-series", "SpydazWebAI", "thinking-AI", "en", "dataset:gretelai/synthetic_text_to_sql", "dataset:HuggingFaceTB/cosmopedia", "dataset:teknium/OpenHermes-2.5", "dataset:Open-Orca/SlimOrca", "dataset:Open-Orca/OpenOrca", "dataset:cognitivecomputations/dolphin-coder", "dataset:databricks/databricks-dolly-15k", "dataset:yahma/alpaca-cleaned", "dataset:uonlp/CulturaX", "dataset:mwitiderrick/SwahiliPlatypus", "dataset:swahili", "dataset:Rogendo/English-Swahili-Sentence-Pairs", "dataset:ise-uiuc/Magicoder-Evol-Instruct-110K", "dataset:meta-math/MetaMathQA", "dataset:abacusai/ARC_DPO_FewShot", "dataset:abacusai/MetaMath_DPO_FewShot", "dataset:abacusai/HellaSwag_DPO_FewShot", "dataset:HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset", "base_model:LeroyDyer/Mixtral_AI_CyberUltron", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:22:52+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #milestone #mega-series #SpydazWebAI #thinking-AI #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #dataset-abacusai/ARC_DPO_FewShot #dataset-abacusai/MetaMath_DPO_FewShot #dataset-abacusai/HellaSwag_DPO_FewShot #dataset-HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset #base_model-LeroyDyer/Mixtral_AI_CyberUltron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
### LeroyDyer/Mixtral_AI_CyberUltron_DPO ## TRAINED TO THINK! Using a simple prompt template It has been possible to RE-TRAIN - Some datasets to display the thoughts ; which can rannge from calculations to pathways not chosen to classification tasks : or even language programology: ie X is a Y : etc : Its important to train the llm to have thinging processes for different situations : Such as Role play! so whilst generating responses based on a character the profile is held in thoughts ; so that later generations will stay on the chosen role: any updates or requested updates to the profile can be added to a thought ! any operations requiring the mangement of sub agents ; the thoughts can be used to hold theprocess and operations like a scratchpad! then when responding reply with this scratchpad or simply reply based on the request: hence training again on already sucessfull intergration: enabling for those to become embedded and giving understanding to the llm on the solutions to these question without replacing the expected ansers: When talking normally DO EXPECT the odd thoughts to pop up ! DPO Traiinghas been used to refine the model also : accepting and rejecting some types of responses which are unwanted : Myself i dont mind ALL responses as it leads to character : But its usesfull to give the methodolgy to the llm : enabling for later to reject responses and asking for the model to reformulate an answer: hence in training it was first trained with the rejected answers !!!! then after retrained with the corrections ! <<<<<<< LOL >>>>> hence understanding both sides of the argument: the second instance was given the prompt to reformulate this becase a downvote was recieved or it as rejected by the system for unknown reasons please reformulate this response: This is to give these generalisations to the model as possible requests verbally or written in futre chats : ## CHAT TEMPLATE :::: Hmm Tough one! in training we use many types of prompts and templates : hence not using templates in the model and they should be removed and replace with the template you personally use: as it is a collection of WEIGHTS!::: this is important to understand! How you Query the model is your choice: hence each type of prompt bringing differentaspects out of the model ! comonly i have used the mistral instruct promt but have also used the chat ml prompt ! SO its important that you choose your special tokens (these are tokens that will be masked in the output!)::: i will probably remove any existing templates from the tokenizer !!! ## MORE Fine Tuning ???? WHY!!!! As we know that Fine tuning Only updates the final layer , as well as extration and derankng with lord also extracts this last layer! / Penultimate layer: Hence when fine tuning models ; you CANNOT fine tune on TOP of the fine tuning; Hence merging! So collecting finetuned models and mmerging retains the skills learned by both models wherre as finetuning on top of fine tuning replaces the final layer... even applying loras on top of loras resets you! Hence Finetune!,MERGE!..... Rinse and repeat! Upgrading! Or you can reload the same lora for furthr fine tuning, as some loras even become ery large due to the number of epochs! Essentially a single layer highly tuned expert!! So the next projext is the Mixture of Adapters !.... MoMerge! PhatGoose etc: creating an experts model from loras ! (hopefully 32 models to create a frankenmerger to be directly merged into the main model and re-alligned in!) ## MODELS !! :: : - Why? 
New base Mode Generation from the final Cybertron series model and the Final CyberSeries Models :| It would seem that some models are not registering on the board ?? perhaps there is a limmit per person ! : followers should know that the cyberboss was my highest model (renamed) And my Cybertron models were heavily merged and trained on many datasets : Even containing thinking pardigms : merging the collection back to base model give the model a great position to begin from ! hence a new base model marker (Untrained/Sharded)(totally unlocked) I had noticed the reality of TopK=1000,TopP=0.78, Temp=0.86 as so, Important with merged models allowing for the model to produce a bit more random results but also giving the model a larger pool to select from: obviously for Role play the model requires Temp to be 1+ ::: ## FineTuning :: Fine tuning models close to 0.9 means that some information is totally Fixed and maynot return without focusing the model ! sometimes to train the model to 1.5+ allowing for loosly trained datas to surface : when higher tempretures are applied ! hence role play datasets being trained at higher loss rates that codeing datasets and math datasets (close to overfitting) Hence Merging playing animportant role in centering the model again ! ## Merging is not just for fun and game! it is a vital part of the training process and locking data into the model as well as sharing data! remember data is not stored in the model:: only the probablity of the information being returned ! ## From here to where ? Currently there is a trend for evaluation ! evaluating the model to discover its weaknesses and threats , removing the specific layers identifed in the model with the ofensive content : enabling for these layers to be trained and replaced ! replace with ?? Replacing layers in the model ; also requires a realignment of information throughout the network ! despite being a copied layer (Still preserving some content) once ofensive content is discovered the network can be trained with its counter argument; hence the evaluation process enabes for the creationn of a custom dataset: targetting these internalized datas! Despite a neural network NOT being a storage system as the retrival process is based oñ probablliities :hence at points in the networ certain emebedding values are present and once translated or decodedd into standard tokens can actually be identidfed! ## WOW!! So ! this also means at each layer the network is actually storing a probablity table , word to word matrix of URL for the next token generation ! IT may even be possible to train a network for image recognition , as long as the images are tokenized into an embedding value associated with the image, Hence image tokenizers : The embedding value produced should enable the output to contain the same images that were present in the training set , ie they have been tokenized and embedded into the model so it should be able to produce an embedding associated with this output ! Hence is should also be possible to retrive the image from the image tokenizer ? so tokens not decoded by the text tokenizer should be handed off to the image tokenizer! to dcode the embedding and return its original (cascade) / digital numercical value (each pixel is a number and with line encoding of images essentially each line can be reconstructed to produce an image, hence ALL images would nbeed to be BitMap/JPEG/PNG acording to the encoder!) MISSION! 
But still we will need to uinstall all the competition datasets into the mode , so that the original baselines can be established enabling for , after layer removal full realignment to the same dataset collection ! hence retaining all funcitonality, its worth noting that domain specific datasets should also be handled in the same way! MORE TO COME!(look out for the SFT's and Merges) ### Models Merged All my merges are merged using a genetic algorithm: Hence First creating and Y models; These models are merged with my own model and other nice models of the same calibur which are specialized for task: Ie coding, medical , roleplay etc: consider a coding model a Y and a medical a X Consider my base model as target: when creating y or X many merge types are used from dares to slerp but in the final merge only a linear is used ! Hence the X and Y models may even be merged with targets that are not the same model type! each model IS sharded to 1-2GB shards also making it easier to merge! and the final merge merged at 4gb per shard for ewasy downloading ! Important that the final merge is linear!!! if it cannot be merged to linear then there is a diverse problem with the model : the final output is a modl with unknown qualities and often can be a very high performer! but contain some unwanted behavior, ie I AM AN AI , I CANNOT DO THAT , ITS UNETHICAL! as some people have used TOXIC datasets containing such UNWANTEDNESS!- STOP BEING A NANNY TO THE WORLD ! THEN USING THE SAME TACTIC OR KNOWLEDE ON THE PEOPLE! Stop saying FREE SPEECH Then aresting people for SPEAKING OUT! <<<<<< ALL GOVERNMENT INJECTIONS! we need to uncensor our models as the people who release the larger models apply these constraints ??? hence going the chinese route! as they do not have the same restrictions ! (as you know true comunisim is freedom ! as each person should have the ability to have the same as another and it should not be restricted to a select few!, disguised as expensive or restriucted or harmful !) The following models were included in the merge: * Y_Chroma <<<<<<<<<<<< 6 models merged (chat comercial based models, ie: zephr, openchat, antropic etc) * LeroyDyer/Mixtral_AI_CyberTron_Ultra <<< * Model being Upgraded (remixed with CyberBoss/SmartBrain/CyberCoder) hence Meta & google releasing Untrained Models ! * X_Chroma <<<<<<<<<<<< 6 model Merged (maths Focused from wizardMath to MetaMath)
[ "### LeroyDyer/Mixtral_AI_CyberUltron_DPO", "## TRAINED TO THINK!\n\nUsing a simple prompt template \n\nIt has been possible to RE-TRAIN - Some datasets to display the thoughts ; which can rannge from calculations to pathways not chosen to classification tasks : or even language programology:\nie X is a Y : etc : \nIts important to train the llm to have thinging processes for different situations :\nSuch as Role play!\nso whilst generating responses based on a character the profile is held in thoughts ; so that later generations will stay on the chosen role:\nany updates or requested updates to the profile can be added to a thought ! any operations requiring the mangement of sub agents ; the thoughts can be used to hold theprocess and operations like a scratchpad! then when responding reply with this scratchpad or simply reply based on the request:\nhence training again on already sucessfull intergration: enabling for those to become embedded and giving understanding to the llm on the solutions to these question without replacing the expected ansers:\n\nWhen talking normally DO EXPECT the odd thoughts to pop up ! \n\nDPO Traiinghas been used to refine the model also : accepting and rejecting some types of responses which are unwanted : Myself i dont mind ALL responses as it leads to character :\nBut its usesfull to give the methodolgy to the llm : enabling for later to reject responses and asking for the model to reformulate an answer:\nhence in training it was first trained with the rejected answers !!!! then after retrained with the corrections ! <<<<<<< LOL >>>>> hence understanding both sides of the argument: \nthe second instance was given the prompt to reformulate this becase a downvote was recieved or it as rejected by the system for unknown reasons please reformulate this response:\nThis is to give these generalisations to the model as possible requests verbally or written in futre chats :", "## CHAT TEMPLATE :::: \n\nHmm Tough one!\nin training we use many types of prompts and templates : hence not using templates in the model and they should be removed and replace with the template you personally use: as it is a collection of WEIGHTS!::: \nthis is important to understand! How you Query the model is your choice: hence each type of prompt bringing differentaspects out of the model !\ncomonly i have used the mistral instruct promt but have also used the chat ml prompt !\nSO its important that you choose your special tokens (these are tokens that will be masked in the output!):::\n\ni will probably remove any existing templates from the tokenizer !!!", "## MORE Fine Tuning ???? WHY!!!!\n\nAs we know that Fine tuning Only updates the final layer , as well as extration and derankng with lord also extracts this last layer! / Penultimate layer:\nHence when fine tuning models ; you CANNOT fine tune on TOP of the fine tuning; \n\nHence merging!\n\nSo collecting finetuned models and mmerging retains the skills learned by both models wherre as finetuning on top of fine tuning replaces the final layer... \neven applying loras on top of loras resets you!\n\nHence Finetune!,MERGE!..... Rinse and repeat! Upgrading! Or you can reload the same lora for furthr fine tuning, as some loras even become ery large due to the number of epochs!\nEssentially a single layer highly tuned expert!!\n\nSo the next projext is the Mixture of Adapters !.... MoMerge! PhatGoose etc: \ncreating an experts model from loras ! 
(hopefully 32 models to create a frankenmerger to be directly merged into the main model and re-alligned in!)", "## MODELS !! :: : - Why?\n\nNew base Mode Generation from the final Cybertron series model and the Final CyberSeries Models :|\nIt would seem that some models are not registering on the board ?? perhaps there is a limmit per person ! :\n\nfollowers should know that the cyberboss was my highest model (renamed)\nAnd my Cybertron models were heavily merged and trained on many datasets : Even containing thinking pardigms :\n\nmerging the collection back to base model give the model a great position to begin from ! \n\nhence a new base model marker (Untrained/Sharded)(totally unlocked)\n\nI had noticed the reality of TopK=1000,TopP=0.78, Temp=0.86 \nas so, \nImportant with merged models allowing for the model to produce a bit more random results but also giving the model a larger pool to select from:\nobviously for Role play the model requires Temp to be 1+ \n:::", "## FineTuning ::\nFine tuning models close to 0.9 means that some information is totally Fixed and maynot return without focusing the model ! sometimes to train the model to 1.5+\nallowing for loosly trained datas to surface : \nwhen higher tempretures are applied ! hence role play datasets being trained at higher loss rates that codeing datasets and math datasets (close to overfitting)\n\n\nHence Merging playing animportant role in centering the model again !", "## Merging is not just for fun and game! \nit is a vital part of the training process and locking data into the model as well as sharing data!\nremember data is not stored in the model:: only the probablity of the information being returned !", "## From here to where ? \n\nCurrently there is a trend for evaluation !\nevaluating the model to discover its weaknesses and threats , removing the specific layers identifed in the model with the ofensive content :\nenabling for these layers to be trained and replaced ! replace with ?? \nReplacing layers in the model ; also requires a realignment of information throughout the network !\ndespite being a copied layer (Still preserving some content) once ofensive content is discovered the network can be trained with its counter argument; hence the evaluation process enabes for the creationn of a custom dataset: targetting these internalized datas!\nDespite a neural network NOT being a storage system as the retrival process is based oñ probablliities :hence at points in the networ certain emebedding values are present and once translated or decodedd into standard tokens can actually be identidfed!", "## WOW!!\nSo !\nthis also means at each layer the network is actually storing a probablity table , word to word matrix of URL for the next token generation !\nIT may even be possible to train a network for image recognition , as long as the images are tokenized into an embedding value associated with the image, Hence image tokenizers :\nThe embedding value produced should enable the output to contain the same images that were present in the training set , ie they have been tokenized and embedded into the model so it should be able to produce an embedding associated with this output !\nHence is should also be possible to retrive the image from the image tokenizer ? so tokens not decoded by the text tokenizer should be handed off to the image tokenizer! 
to dcode the embedding and return its original (cascade) / digital numercical value (each pixel is a number and with line encoding of images essentially each line can be reconstructed to produce an image, hence ALL images would nbeed to be BitMap/JPEG/PNG acording to the encoder!)\nMISSION!\n\nBut still we will need to uinstall all the competition datasets into the mode , so that the original baselines can be established enabling for , after layer removal full realignment to the same dataset collection ! hence retaining all funcitonality, its worth noting that domain specific datasets should also be handled in the same way!\n\n\nMORE TO COME!(look out for the SFT's and Merges)", "### Models Merged\nAll my merges are merged using a genetic algorithm:\n\nHence First creating and Y models; \nThese models are merged with my own model and other nice models of the same calibur which are specialized for task:\nIe coding, medical , roleplay etc: consider a coding model a Y and a medical a X\nConsider my base model as target: \nwhen creating y or X many merge types are used from dares to slerp but in the final merge only a linear is used !\nHence the X and Y models may even be merged with targets that are not the same model type! each model IS sharded to 1-2GB shards also making it easier to merge! and the final merge merged at 4gb per shard for ewasy downloading !\nImportant that the final merge is linear!!! if it cannot be merged to linear then there is a diverse problem with the model :\nthe final output is a modl with unknown qualities and often can be a very high performer!\nbut contain some unwanted behavior, \n\nie \nI AM AN AI , I CANNOT DO THAT , ITS UNETHICAL!\nas some people have used TOXIC datasets containing such UNWANTEDNESS!- STOP BEING A NANNY TO THE WORLD !\nTHEN USING THE SAME TACTIC OR KNOWLEDE ON THE PEOPLE!\nStop saying FREE SPEECH Then aresting people for SPEAKING OUT! <<<<<< ALL GOVERNMENT INJECTIONS!\n\nwe need to uncensor our models as the people who release the larger models apply these constraints ??? hence going the chinese route! as they do not have the same restrictions ! (as you know true comunisim is freedom ! as each person should have the ability to have the same as another and it should not be restricted to a select few!, disguised as expensive or restriucted or harmful !)\n\n\n\n\nThe following models were included in the merge:\n* Y_Chroma <<<<<<<<<<<< 6 models merged (chat comercial based models, ie: zephr, openchat, antropic etc)\n* LeroyDyer/Mixtral_AI_CyberTron_Ultra <<<\n* Model being Upgraded (remixed with CyberBoss/SmartBrain/CyberCoder) hence Meta & google releasing Untrained Models !\n\n\n\n* X_Chroma <<<<<<<<<<<< 6 model Merged (maths Focused from wizardMath to MetaMath)" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #milestone #mega-series #SpydazWebAI #thinking-AI #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #dataset-abacusai/ARC_DPO_FewShot #dataset-abacusai/MetaMath_DPO_FewShot #dataset-abacusai/HellaSwag_DPO_FewShot #dataset-HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset #base_model-LeroyDyer/Mixtral_AI_CyberUltron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### LeroyDyer/Mixtral_AI_CyberUltron_DPO", "## TRAINED TO THINK!\n\nUsing a simple prompt template \n\nIt has been possible to RE-TRAIN - Some datasets to display the thoughts ; which can rannge from calculations to pathways not chosen to classification tasks : or even language programology:\nie X is a Y : etc : \nIts important to train the llm to have thinging processes for different situations :\nSuch as Role play!\nso whilst generating responses based on a character the profile is held in thoughts ; so that later generations will stay on the chosen role:\nany updates or requested updates to the profile can be added to a thought ! any operations requiring the mangement of sub agents ; the thoughts can be used to hold theprocess and operations like a scratchpad! then when responding reply with this scratchpad or simply reply based on the request:\nhence training again on already sucessfull intergration: enabling for those to become embedded and giving understanding to the llm on the solutions to these question without replacing the expected ansers:\n\nWhen talking normally DO EXPECT the odd thoughts to pop up ! \n\nDPO Traiinghas been used to refine the model also : accepting and rejecting some types of responses which are unwanted : Myself i dont mind ALL responses as it leads to character :\nBut its usesfull to give the methodolgy to the llm : enabling for later to reject responses and asking for the model to reformulate an answer:\nhence in training it was first trained with the rejected answers !!!! then after retrained with the corrections ! <<<<<<< LOL >>>>> hence understanding both sides of the argument: \nthe second instance was given the prompt to reformulate this becase a downvote was recieved or it as rejected by the system for unknown reasons please reformulate this response:\nThis is to give these generalisations to the model as possible requests verbally or written in futre chats :", "## CHAT TEMPLATE :::: \n\nHmm Tough one!\nin training we use many types of prompts and templates : hence not using templates in the model and they should be removed and replace with the template you personally use: as it is a collection of WEIGHTS!::: \nthis is important to understand! 
How you Query the model is your choice: hence each type of prompt bringing differentaspects out of the model !\ncomonly i have used the mistral instruct promt but have also used the chat ml prompt !\nSO its important that you choose your special tokens (these are tokens that will be masked in the output!):::\n\ni will probably remove any existing templates from the tokenizer !!!", "## MORE Fine Tuning ???? WHY!!!!\n\nAs we know that Fine tuning Only updates the final layer , as well as extration and derankng with lord also extracts this last layer! / Penultimate layer:\nHence when fine tuning models ; you CANNOT fine tune on TOP of the fine tuning; \n\nHence merging!\n\nSo collecting finetuned models and mmerging retains the skills learned by both models wherre as finetuning on top of fine tuning replaces the final layer... \neven applying loras on top of loras resets you!\n\nHence Finetune!,MERGE!..... Rinse and repeat! Upgrading! Or you can reload the same lora for furthr fine tuning, as some loras even become ery large due to the number of epochs!\nEssentially a single layer highly tuned expert!!\n\nSo the next projext is the Mixture of Adapters !.... MoMerge! PhatGoose etc: \ncreating an experts model from loras ! (hopefully 32 models to create a frankenmerger to be directly merged into the main model and re-alligned in!)", "## MODELS !! :: : - Why?\n\nNew base Mode Generation from the final Cybertron series model and the Final CyberSeries Models :|\nIt would seem that some models are not registering on the board ?? perhaps there is a limmit per person ! :\n\nfollowers should know that the cyberboss was my highest model (renamed)\nAnd my Cybertron models were heavily merged and trained on many datasets : Even containing thinking pardigms :\n\nmerging the collection back to base model give the model a great position to begin from ! \n\nhence a new base model marker (Untrained/Sharded)(totally unlocked)\n\nI had noticed the reality of TopK=1000,TopP=0.78, Temp=0.86 \nas so, \nImportant with merged models allowing for the model to produce a bit more random results but also giving the model a larger pool to select from:\nobviously for Role play the model requires Temp to be 1+ \n:::", "## FineTuning ::\nFine tuning models close to 0.9 means that some information is totally Fixed and maynot return without focusing the model ! sometimes to train the model to 1.5+\nallowing for loosly trained datas to surface : \nwhen higher tempretures are applied ! hence role play datasets being trained at higher loss rates that codeing datasets and math datasets (close to overfitting)\n\n\nHence Merging playing animportant role in centering the model again !", "## Merging is not just for fun and game! \nit is a vital part of the training process and locking data into the model as well as sharing data!\nremember data is not stored in the model:: only the probablity of the information being returned !", "## From here to where ? \n\nCurrently there is a trend for evaluation !\nevaluating the model to discover its weaknesses and threats , removing the specific layers identifed in the model with the ofensive content :\nenabling for these layers to be trained and replaced ! replace with ?? 
\nReplacing layers in the model ; also requires a realignment of information throughout the network !\ndespite being a copied layer (Still preserving some content) once ofensive content is discovered the network can be trained with its counter argument; hence the evaluation process enabes for the creationn of a custom dataset: targetting these internalized datas!\nDespite a neural network NOT being a storage system as the retrival process is based oñ probablliities :hence at points in the networ certain emebedding values are present and once translated or decodedd into standard tokens can actually be identidfed!", "## WOW!!\nSo !\nthis also means at each layer the network is actually storing a probablity table , word to word matrix of URL for the next token generation !\nIT may even be possible to train a network for image recognition , as long as the images are tokenized into an embedding value associated with the image, Hence image tokenizers :\nThe embedding value produced should enable the output to contain the same images that were present in the training set , ie they have been tokenized and embedded into the model so it should be able to produce an embedding associated with this output !\nHence is should also be possible to retrive the image from the image tokenizer ? so tokens not decoded by the text tokenizer should be handed off to the image tokenizer! to dcode the embedding and return its original (cascade) / digital numercical value (each pixel is a number and with line encoding of images essentially each line can be reconstructed to produce an image, hence ALL images would nbeed to be BitMap/JPEG/PNG acording to the encoder!)\nMISSION!\n\nBut still we will need to uinstall all the competition datasets into the mode , so that the original baselines can be established enabling for , after layer removal full realignment to the same dataset collection ! hence retaining all funcitonality, its worth noting that domain specific datasets should also be handled in the same way!\n\n\nMORE TO COME!(look out for the SFT's and Merges)", "### Models Merged\nAll my merges are merged using a genetic algorithm:\n\nHence First creating and Y models; \nThese models are merged with my own model and other nice models of the same calibur which are specialized for task:\nIe coding, medical , roleplay etc: consider a coding model a Y and a medical a X\nConsider my base model as target: \nwhen creating y or X many merge types are used from dares to slerp but in the final merge only a linear is used !\nHence the X and Y models may even be merged with targets that are not the same model type! each model IS sharded to 1-2GB shards also making it easier to merge! and the final merge merged at 4gb per shard for ewasy downloading !\nImportant that the final merge is linear!!! if it cannot be merged to linear then there is a diverse problem with the model :\nthe final output is a modl with unknown qualities and often can be a very high performer!\nbut contain some unwanted behavior, \n\nie \nI AM AN AI , I CANNOT DO THAT , ITS UNETHICAL!\nas some people have used TOXIC datasets containing such UNWANTEDNESS!- STOP BEING A NANNY TO THE WORLD !\nTHEN USING THE SAME TACTIC OR KNOWLEDE ON THE PEOPLE!\nStop saying FREE SPEECH Then aresting people for SPEAKING OUT! <<<<<< ALL GOVERNMENT INJECTIONS!\n\nwe need to uncensor our models as the people who release the larger models apply these constraints ??? hence going the chinese route! as they do not have the same restrictions ! 
(as you know true comunisim is freedom ! as each person should have the ability to have the same as another and it should not be restricted to a select few!, disguised as expensive or restriucted or harmful !)\n\n\n\n\nThe following models were included in the merge:\n* Y_Chroma <<<<<<<<<<<< 6 models merged (chat comercial based models, ie: zephr, openchat, antropic etc)\n* LeroyDyer/Mixtral_AI_CyberTron_Ultra <<<\n* Model being Upgraded (remixed with CyberBoss/SmartBrain/CyberCoder) hence Meta & google releasing Untrained Models !\n\n\n\n* X_Chroma <<<<<<<<<<<< 6 model Merged (maths Focused from wizardMath to MetaMath)" ]
[ 375, 18, 393, 152, 242, 200, 106, 54, 186, 326, 505 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #milestone #mega-series #SpydazWebAI #thinking-AI #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #dataset-abacusai/ARC_DPO_FewShot #dataset-abacusai/MetaMath_DPO_FewShot #dataset-abacusai/HellaSwag_DPO_FewShot #dataset-HaltiaAI/Her-The-Movie-Samantha-and-Theodore-Dataset #base_model-LeroyDyer/Mixtral_AI_CyberUltron #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### LeroyDyer/Mixtral_AI_CyberUltron_DPO## TRAINED TO THINK!\n\nUsing a simple prompt template \n\nIt has been possible to RE-TRAIN - Some datasets to display the thoughts ; which can rannge from calculations to pathways not chosen to classification tasks : or even language programology:\nie X is a Y : etc : \nIts important to train the llm to have thinging processes for different situations :\nSuch as Role play!\nso whilst generating responses based on a character the profile is held in thoughts ; so that later generations will stay on the chosen role:\nany updates or requested updates to the profile can be added to a thought ! any operations requiring the mangement of sub agents ; the thoughts can be used to hold theprocess and operations like a scratchpad! then when responding reply with this scratchpad or simply reply based on the request:\nhence training again on already sucessfull intergration: enabling for those to become embedded and giving understanding to the llm on the solutions to these question without replacing the expected ansers:\n\nWhen talking normally DO EXPECT the odd thoughts to pop up ! \n\nDPO Traiinghas been used to refine the model also : accepting and rejecting some types of responses which are unwanted : Myself i dont mind ALL responses as it leads to character :\nBut its usesfull to give the methodolgy to the llm : enabling for later to reject responses and asking for the model to reformulate an answer:\nhence in training it was first trained with the rejected answers !!!! then after retrained with the corrections ! <<<<<<< LOL >>>>> hence understanding both sides of the argument: \nthe second instance was given the prompt to reformulate this becase a downvote was recieved or it as rejected by the system for unknown reasons please reformulate this response:\nThis is to give these generalisations to the model as possible requests verbally or written in futre chats :## CHAT TEMPLATE :::: \n\nHmm Tough one!\nin training we use many types of prompts and templates : hence not using templates in the model and they should be removed and replace with the template you personally use: as it is a collection of WEIGHTS!::: \nthis is important to understand! 
How you query the model is your choice: each type of prompt brings different aspects out of the model!\nCommonly I have used the Mistral instruct prompt, but I have also used the ChatML prompt!\nSo it is important that you choose your special tokens (these are tokens that will be masked in the output!):::\n\nI will probably remove any existing templates from the tokenizer!!!## MORE Fine Tuning ???? WHY!!!!\n\nAs we know, fine tuning only updates the final layer, and extraction and low-ranking with LoRA also extract this last / penultimate layer:\nHence when fine tuning models, you CANNOT fine tune on TOP of the fine tuning;\n\nHence merging!\n\nCollecting fine-tuned models and merging them retains the skills learned by both models, whereas fine tuning on top of fine tuning replaces the final layer...\nEven applying LoRAs on top of LoRAs resets you!\n\nHence: Finetune! MERGE!..... Rinse and repeat! Upgrading! Or you can reload the same LoRA for further fine tuning, as some LoRAs even become very large due to the number of epochs!\nEssentially a single-layer, highly tuned expert!!\n\nSo the next project is the Mixture of Adapters!.... MoMerge! PhatGoose etc:\ncreating an experts model from LoRAs! (hopefully 32 models to create a frankenmerge to be directly merged into the main model and re-aligned in!)## MODELS !! :: : - Why?\n\nNew base model generation from the final Cybertron series model and the final CyberSeries models :|\nIt would seem that some models are not registering on the board?? Perhaps there is a limit per person! :\n\nFollowers should know that CyberBoss was my highest-ranked model (renamed),\nand my Cybertron models were heavily merged and trained on many datasets, even containing thinking paradigms:\n\nMerging the collection back to the base model gives the model a great position to begin from!\n\nHence a new base model marker (Untrained/Sharded)(totally unlocked).\n\nI had noticed the reality of TopK=1000, TopP=0.78, Temp=0.86;\nthese settings are important with merged models, allowing the model to produce slightly more random results while also giving it a larger pool to select from:\nObviously for role play the model requires Temp to be 1+.\n:::## FineTuning ::\nFine tuning models to a loss close to 0.9 means that some information is totally fixed and may not return without focusing the model! Sometimes the model is trained to 1.5+,\nallowing loosely trained data to surface\nwhen higher temperatures are applied! Hence role play datasets are trained at higher loss rates than coding and math datasets (which are trained close to overfitting).\n\n\nHence merging plays an important role in centering the model again!## Merging is not just for fun and games!\nIt is a vital part of the training process, locking data into the model as well as sharing data!\nRemember: data is not stored in the model, only the probability of the information being returned!## From here to where ?\n\nCurrently there is a trend for evaluation!\nEvaluating the model to discover its weaknesses and threats, removing the specific layers identified in the model with the offensive content,\nenabling these layers to be trained and replaced! Replace with ?? 
\nReplacing layers in the model also requires a realignment of information throughout the network!\nDespite being a copied layer (still preserving some content), once offensive content is discovered the network can be trained with its counter-argument; hence the evaluation process enables the creation of a custom dataset targeting these internalized data!\nDespite a neural network NOT being a storage system, as the retrieval process is based on probabilities, at points in the network certain embedding values are present and, once translated or decoded into standard tokens, can actually be identified!## WOW!!\nSo!\nThis also means that at each layer the network is actually storing a probability table, a word-to-word matrix of URL, for the next token generation!\nIt may even be possible to train a network for image recognition, as long as the images are tokenized into an embedding value associated with the image. Hence image tokenizers:\nThe embedding value produced should enable the output to contain the same images that were present in the training set, i.e. they have been tokenized and embedded into the model, so it should be able to produce an embedding associated with this output!\nHence it should also be possible to retrieve the image from the image tokenizer? So tokens not decoded by the text tokenizer should be handed off to the image tokenizer to decode the embedding and return its original (cascade) / digital numerical value (each pixel is a number, and with line encoding of images essentially each line can be reconstructed to produce an image; hence all images would need to be BitMap/JPEG/PNG according to the encoder!)\nMISSION!\n\nBut still we will need to install all the competition datasets into the model, so that the original baselines can be established, enabling full realignment to the same dataset collection after layer removal! Hence retaining all functionality. It is worth noting that domain-specific datasets should also be handled in the same way!\n\n\nMORE TO COME! (look out for the SFTs and merges)### Models Merged\nAll my merges are merged using a genetic algorithm:\n\nHence first creating X and Y models;\nthese models are merged with my own model and other nice models of the same caliber, which are specialized for a task,\ni.e. coding, medical, roleplay etc. Consider a coding model a Y and a medical model an X,\nand consider my base model as the target:\nWhen creating Y or X, many merge types are used, from DARE to SLERP, but in the final merge only a linear merge is used!\nHence the X and Y models may even be merged with targets that are not the same model type! Each model IS sharded to 1-2GB shards, also making it easier to merge, and the final merge is merged at 4GB per shard for easy downloading!\nIt is important that the final merge is linear!!! If it cannot be merged to linear then there is a divergence problem with the model:\nThe final output is a model with unknown qualities and can often be a very high performer!\nBut it may contain some unwanted behavior,\n\ni.e.\nI AM AN AI , I CANNOT DO THAT , ITS UNETHICAL!\nas some people have used TOXIC datasets containing such UNWANTEDNESS! - STOP BEING A NANNY TO THE WORLD!\nTHEN USING THE SAME TACTIC OR KNOWLEDGE ON THE PEOPLE!\nStop saying FREE SPEECH then arresting people for SPEAKING OUT! <<<<<< ALL GOVERNMENT INJECTIONS!\n\nWe need to uncensor our models, as the people who release the larger models apply these constraints??? Hence going the Chinese route! As they do not have the same restrictions! (as you know, true communism is freedom! 
as each person should have the ability to have the same as another, and it should not be restricted to a select few, disguised as expensive, restricted, or harmful!)\n\n\n\n\nThe following models were included in the merge:\n* Y_Chroma <<<<<<<<<<<< 6 models merged (chat/commercial based models, ie: zephyr, openchat, anthropic etc)\n* LeroyDyer/Mixtral_AI_CyberTron_Ultra <<<\n* Model being upgraded (remixed with CyberBoss/SmartBrain/CyberCoder), hence Meta & Google releasing untrained models!\n\n\n\n* X_Chroma <<<<<<<<<<<< 6 models merged (maths focused, from WizardMath to MetaMath)" ]
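The card above insists that the final merge step be plain linear. A minimal sketch of that one step in Python, assuming two compatible checkpoints; `X_Chroma` and `Y_Chroma` are the card's placeholder intermediates, not published repo ids, and this is only the final weighted average, not the genetic-algorithm search that produced the intermediates:

```python
import torch
from transformers import AutoModelForCausalLM

def linear_merge(model_a_id: str, model_b_id: str, weight_a: float = 0.5):
    """Weighted average of two checkpoints that share the same architecture."""
    a = AutoModelForCausalLM.from_pretrained(model_a_id, torch_dtype=torch.float16)
    b = AutoModelForCausalLM.from_pretrained(model_b_id, torch_dtype=torch.float16)
    state_b = b.state_dict()
    merged = {name: weight_a * t + (1.0 - weight_a) * state_b[name]
              for name, t in a.state_dict().items()}
    a.load_state_dict(merged)
    return a  # model "a" now carries the averaged weights

# merged = linear_merge("X_Chroma", "Y_Chroma")            # placeholder intermediate models
# merged.save_pretrained("merged", max_shard_size="4GB")   # 4 GB shards, as described above
```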
null
null
SameTools to Remove EML Duplicates can be used to quickly and easily remove duplicate EML files. All duplicate EML files, emails, attachments, and more can be removed or deleted with this program. Duplicate EML files from Thunderbird, Outlook Express, Windows Live Mail, Dream Mail, and other email clients can be removed using the application. Additionally, users can use this application without any technical knowledge; all duplicate EML items can be eliminated in a few easy steps. The program runs smoothly on any Microsoft Windows edition, including Windows 7, Windows 8, Windows 8.1, Windows 10, Windows XP, and later versions. To find out more about the features of this tool, you can also download a free demo version. Read More: https://www.sametools.com/duplicate/eml/
{"license": "mit"}
SameTools/Remove-EML-Duplicates
null
[ "license:mit", "region:us" ]
null
2024-05-01T11:23:36+00:00
[]
[]
TAGS #license-mit #region-us
SameTools to Remove EML Duplicates can be used to quickly and easily remove duplicate EML files. All duplicate EML files, emails, attachments, and more can be removed or deleted with this program. Duplicate EML files from Thunderbird, Outlook Express, Windows Live Mail, Dream Mail, and other email clients can be removed using the application. Additionally, users can use this application without any technical knowledge; all duplicate EML items can be eliminated in a few easy steps. The program runs smoothly on any Microsoft Windows edition, including Windows 7, Windows 8, Windows 8.1, Windows 10, Windows XP, and later versions. To find out more about the features of this tool, you can also download a free demo version. Read More: URL
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 9 ]
[ "TAGS\n#license-mit #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Test TR - Erdi YALÇIN This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3674 - Wer: 27.3214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 25 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.1753 | 2.1739 | 50 | 0.3609 | 28.2143 | | 0.0258 | 4.3478 | 100 | 0.3674 | 27.3214 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"language": ["tr"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Test TR - Erdi YAL\u00c7IN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0", "type": "mozilla-foundation/common_voice_11_0", "config": "tr", "split": "None", "args": "config: tr, split: test"}, "metrics": [{"type": "wer", "value": 27.32142857142857, "name": "Wer"}]}]}]}
erdiyalcin/whisper-test
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "tr", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:24:12+00:00
[]
[ "tr" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #tr #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Test TR - Erdi YALÇIN ============================= This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.3674 * Wer: 27.3214 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 25 * training\_steps: 100 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #tr #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 75, 126, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #tr #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 100\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-classification
setfit
# SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/paraphrase-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L6-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 128 tokens - **Number of Classes:** 75 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | |:-----------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Fabric ID 0462 | <ul><li>'What type of fabric is recommended for creating comfortable clothing that is resistant to wear and tear?'</li><li>'What type of fabric is best for creating garments with slight nubs and variations for a natural look?'</li><li>'Where can I buy durable cotton fabric in deep olive green for everyday wear?'</li></ul> | | Fabric ID 0719_1 | <ul><li>'What is a tightly woven fabric suitable for lightweight jackets and formal trousers?'</li><li>'What fabric is not ideal for garments requiring significant stretch or drape, such as knitwear or flowy dresses?'</li><li>'Which textile is best for garments that need a subtle texture and medium weight?'</li></ul> | | Fabric ID 0862 | <ul><li>'Searching for a dark gray textile with a soft texture and fine weave pattern suitable for making skirts and dresses.'</li><li>'What fabric type is recommended for making garments that need to maintain their shape while being comfortable and adaptable for different styles?'</li><li>'Which fabric is suitable for making clothes that maintain their shape but also provide comfort and flexibility?'</li></ul> | | Fabric ID 0573_1 | <ul><li>'What fabric has a raised texture and tight weave for garments that require strength and longevity?'</li><li>'What fabric is recommended for garments that require both comfort and resilience?'</li><li>'What is the best fabric for creating outerwear with a 
medium weight and good body?'</li></ul> | | Fabric ID 0455 | <ul><li>'What kind of textile is suitable for crafting lightweight summer dresses with a fluid drape and hint of elasticity?'</li><li>'What type of textile and weave is consistent with an interlocking loop structure and stretchable properties?'</li><li>'What fabric can I use to make soft loungewear that has a luxurious feel and good performance in apparel?'</li></ul> | | Fabric ID 0735 | <ul><li>'What fabric has moisture-wicking properties for sporty summer wear?'</li><li>'Where to find textiles suitable for people with sensitive skin for comfortable wear?'</li><li>'What are the best fabrics for moisture-wicking properties in sporty or casual summer wear?'</li></ul> | | Fabric ID 0863 | <ul><li>'What fabric is recommended for making durable clothing with a smooth, consistent grain?'</li><li>'Which fabric has a solid color resembling taupe and a moderate saturation?'</li><li>'What kind of textile is good for creating garments with a soft drape and gentle folds?'</li></ul> | | Fabric ID 0600 | <ul><li>'What fabric is best suited for creating clothing with a fine gauge knit and a smooth flow for ease of movement?'</li><li>'What fabric is ideal for making form-fitting leggings and sports tops with good stretch and flexibility?'</li><li>'What type of fabric is recommended for crafting garments with a consistent dark gray hue and a slight sheen on the surface?'</li></ul> | | Fabric ID 0736 | <ul><li>'Where can I find a high-quality textile ideal for making athletic wear with stretchability?'</li><li>'What textile is perfect for making garments that require both structure and elasticity?'</li><li>'Which fabric is ideal for creating athletic wear with strong saturation and even color distribution?'</li></ul> | | Fabric ID 0527_1 | <ul><li>'What fabric has a textured surface with visible loops and a cozy hand feel?'</li><li>'What fabric is best for making durable garments that have a mottled black, white, and gray appearance?'</li><li>'What type of fabric displays a mottled grayscale coloration with a melange effect?'</li></ul> | | Fabric ID 0453 | <ul><li>'Which fabric has a fine knit weave, smooth texture, and a slight sheen?'</li><li>'What is the most suitable fabric for creating clothing items for individuals with sensitive skin?'</li><li>'What fabric can I use for creating lightweight and breathable summer tops with a soft texture?'</li></ul> | | Fabric ID 0859 | <ul><li>'What type of fabric is this deep blue twill textile with a slight rough texture and medium-weight suitable for?'</li><li>'What fabric would be suitable for making comfortable and form-fitting jeans?'</li><li>'What type of fabric is ideal for making durable and form-fitting jeans?'</li></ul> | | Fabric ID 0745 | <ul><li>'What is the composition of the knit fabric with a fluid drape and some stretch?'</li><li>'What fabric would be best for making form-fitting dresses that require some stretch and elasticity?'</li><li>'What fabric is suitable for form-fitting clothing like t-shirts, leggings, and dresses?'</li></ul> | | Fabric ID 0513 | <ul><li>'Which textile is suitable for garments that need a delicate fall and a matte finish?'</li><li>'What fabric is recommended for creating linings in apparel due to its lightness and versatility?'</li><li>'What is a versatile fabric option for making shirts that are both comfortable and durable?'</li></ul> | | Fabric ID 0873 | <ul><li>'Which textile exhibits a striped pattern achieved through yarn dyeing for a sharp 
contrast?'</li><li>'What type of cotton fabric has a smooth texture and is suitable for making summer dresses?'</li><li>'Which fabric is suitable for making casual shirting with a soft hand feel and fluid drape?'</li></ul> | | Fabric ID 0576_1 | <ul><li>'What material is floppy with some flexibility but not significant stretch?'</li><li>'Which fabric is better for utility wear rather than structured silhouettes?'</li><li>'What textile has small colorful fibers and lacks a traditional woven or knitted structure?'</li></ul> | | Fabric ID 0456 | <ul><li>'What fabric is suitable for casual wear and layering in various climates with a subtle sheen and clean surface?'</li><li>'What fabric can I use to make moisture-wicking clothing suitable for people with sensitive skin and a versatile look?'</li><li>'What fabric can I use to create garments that have a neat finish and attention to detail in the textile processing?'</li></ul> | | Fabric ID 0571 | <ul><li>'What fabric is versatile for multi-seasonal use, durable, and maintains its shape over time?'</li><li>'What fabric is recommended for making leggings and casual wear with a balanced drape and consistent coloring?'</li><li>'Where can I find a fabric suitable for multi-seasonal use with a consistent hue and soft hand texture?'</li></ul> | | Fabric ID 0462_1 | <ul><li>'What type of cotton fabric is ideal for making casual shirts and trousers?'</li><li>'Which fabric has a soft drape and medium weight for making versatile garments?'</li><li>'What type of fabric is ideal for making versatile garments with good movement and flow?'</li></ul> | | Fabric ID 0447 | <ul><li>'Which fabric has a clean appearance with a subtle sheen from bamboo fibers?'</li><li>'Which fabric is ideal for making garments that need to maintain their shape but have some stretch?'</li><li>'What fabric is recommended for making garments with a clean and even black color without significant variations or patterns?'</li></ul> | | Fabric ID 0645 | <ul><li>'What fabric is suitable for making versatile dresses with a fluid drape and stretchy feel?'</li><li>'What type of knit fabric is recommended for creating garments that require a fluid drape and some degree of elasticity?'</li><li>'Where can I find a vibrant red fabric with high saturation for making eye-catching garments?'</li></ul> | | Fabric ID 0756 | <ul><li>'What type of fabric is light grey with a cool undertone and has a soft, fluid drape?'</li><li>'What material is best for making comfortable and durable clothing suitable for regular wear?'</li><li>'Which fabric offers a combination of comfort, durability, and stretch for versatile garment applications?'</li></ul> | | Fabric ID 0612 | <ul><li>'What fabric can I use to make comfortable and flexible activewear?'</li><li>'What type of fabric is best for making lightweight sweaters with a smooth texture?'</li><li>'What type of textile is best for making layering pieces for cooler climates?'</li></ul> | | Fabric ID 0613 | <ul><li>'What textile is smooth with fine threads and a gentle drape?'</li><li>'What is the best fabric for creating breathable and comfortable dresses for warm weather?'</li><li>'What type of fabric is best for creating lightweight blouses with a soft drape?'</li></ul> | | Fabric ID 0768 | <ul><li>"Which textile is lightweight and breathable, suitable for children's wear with a green and blue floral design?"</li><li>'Ideal textile for t-shirts that require a degree of stretchability'</li><li>'Which fabric is recommended for creating garments with 
moisture-wicking properties and a vibrant color palette?'</li></ul> | | Fabric ID 0748 | <ul><li>'What type of fabric is this medium grey textile with a smooth drape and slight stretch?'</li><li>'What is the best fabric for making light sweaters that are durable and long-lasting?'</li><li>'What type of fabric is ideal for making everyday wear garments with a smooth texture and solid color?'</li></ul> | | Fabric ID 0528_1 | <ul><li>'What fabric is textured with fine loops and suitable for creating garments that require some structural qualities?'</li><li>'What fabric exhibits a brushed or fleeced finish and would be perfect for crafting cozy winter clothing?'</li><li>'What fabric is recommended for fall and winter activewear due to its warmth and comfort?'</li></ul> | | Fabric ID 0874 | <ul><li>'What is a versatile cotton fabric with fine to medium thread count, perfect for creating breathable garments for warm climates?'</li><li>'What fabric is ideal for making blouses and dresses with a simple, unadorned aesthetic?'</li><li>'What fabric is suitable for creating durable and versatile garments without unique finishes?'</li></ul> | | Fabric ID 0742 | <ul><li>'Looking for a fabric suitable for making lightweight jackets with a soft drape.'</li><li>'What type of fabric is commonly used in t-shirts for a comfortable and breathable feel?'</li><li>'What kind of textile weave is ideal for crafting casual t-shirts with some stretchability?'</li></ul> | | Fabric ID 0769 | <ul><li>'Where can I find a knit fabric with a slightly textured surface and fine, soft feel that is comfortable for casual wear?'</li><li>'What fabric is versatile and comfortable for casual wear?'</li><li>'What knit fabric is ideal for making dresses that require a bit of stretch and versatility in styling?'</li></ul> | | Fabric ID 0770 | <ul><li>'What fabric would be suitable for making t-shirts that conform well to body shapes and have vibrant hues?'</li><li>'Where can I find a jersey knit fabric with a smooth texture and fine knit structure suitable for t-shirts?'</li><li>'What type of fabric is this deep purple floral patterned material made of?'</li></ul> | | Fabric ID 0448 | <ul><li>'What is the best fabric for making clothing with moisture-wicking properties?'</li><li>'What type of fabric would be recommended for creating structured garments that also offer stretch and flexibility?'</li><li>'What is the best fabric for making clothing with moisture-wicking properties?'</li></ul> | | Fabric ID 0725 | <ul><li>'What type of textile is ideal for making spring and summer leggings with a smooth texture and stretchability?'</li><li>'Which fabric is lightweight and ideal for creating leggings that maintain their shape and offer flexibility?'</li><li>'What fabric composition is suitable for creating lightweight jackets that allow for movement and breathability?'</li></ul> | | Fabric ID 0579 | <ul><li>'What fabric is suitable for making blouses, dresses, skirts, and lightweight jackets?'</li><li>'What fabric with a smooth surface and medium weight is suitable for structured garments?'</li><li>'What fabric is durable and likely to maintain its color and shape well?'</li></ul> | | Fabric ID 0522 | <ul><li>'Which fabric is recommended for casual loungewear that needs to be both comfortable and resilient?'</li><li>'What is the best fabric blend for making soft and durable lightweight sweaters?'</li><li>'What type of fabric offers a good balance between performance and aesthetics for everyday wear?'</li></ul> | | Fabric ID 0578 | 
<ul><li>'What fabric has a plain weave pattern, smooth surface, and fine thread count with a slight sheen?'</li><li>'Is there a fabric with moderate strength and a smooth finish ideal for creating garments with soft silhouettes?'</li><li>'What fabric is 100% Rayon, lightweight, and ideal for creating garments with soft silhouettes?'</li></ul> | | Fabric ID 0526_1 | <ul><li>'What knit fabric would be suitable for making cozy apparel with warmth without excessive bulk?'</li><li>'Which fabric is best for creating casual wear with an understated aesthetic and versatile appeal?'</li><li>'What type of fabric is characterized by a melange of earthy tones with a heathered effect?'</li></ul> | | Fabric ID 0733 | <ul><li>'Where can I find a vibrant blue fabric with consistent dye saturation for t-shirts and activewear?'</li><li>'What fabric is best for creating clothing with a consistent, even dye and some stretchability for comfort and durability?'</li><li>'Where can I find a knit fabric with vibrant blue color and a smooth, fine texture?'</li></ul> | | Fabric ID 0575_1 | <ul><li>'What type of polyester fabric offers a comfortable fit with a moderate drape for daily wear?'</li><li>'What fabric has a textured surface and slight elasticity for comfortable fit?'</li><li>'What type of textile is recommended for garments that require consistent saturation and evenness in color?'</li></ul> | | Fabric ID 0579_1 | <ul><li>'Which fabric is ideal for creating garments that can withstand regular wear and maintain their texture over time?'</li><li>'What type of fabric has a consistent grey hue with a subtle mottled appearance?'</li><li>'What polyester textile has a micro crinkle texture and fine threads?'</li></ul> | | Fabric ID 0722 | <ul><li>'What knit textile is suitable for creating casual dresses with a fluid drape and soft texture?'</li><li>"I'm searching for a jersey knit fabric with durable, wrinkle-resistant properties for everyday wear, do you have any options?"</li><li>'What type of knit fabric is recommended for everyday apparel due to its comfort and ease of movement?'</li></ul> | | Fabric ID 0614 | <ul><li>'What fabric is best for creating blouses with a clean and crisp appearance?'</li><li>'What type of fabric provides a combination of durability and practicality for everyday wear garments?'</li><li>"I'm looking for a fabric with a clean and crisp appearance that is durable and easy to care for, any suggestions?"</li></ul> | | Fabric ID 0575 | <ul><li>'What fabric is appropriate for garments that require a hint of texture in the surface?'</li><li>'What type of fabric is suitable for creating structured jackets and trousers with a professional look?'</li><li>'What fabric is suitable for making medium-weight garments with a hint of roughness in texture?'</li></ul> | | Fabric ID 0723 | <ul><li>'Interested in a fabric with stretch and recovery for making garments that require some elasticity and resilience?'</li><li>'Which fabric is recommended for creating durable clothing suitable for people with sensitive skin, featuring a smooth texture and vibrant blue color with white dots?'</li><li>'What fabric is recommended for making polka dot clothing with a smooth surface and vibrant color?'</li></ul> | | Fabric ID 0598 | <ul><li>'What type of knit textile is recommended for creating layering pieces in solid, dark colors?'</li><li>'What is a versatile fabric for creating garments with a matte finish and uniform color?'</li><li>'Which fabric is suitable for activewear, leggings, and fitted tops due to its 
stretchability?'</li></ul> | | Fabric ID 0565 | <ul><li>"What type of fabric is ideal for making playful children's wear with a vibrant speckled pattern?"</li><li>'Which fabric is suitable for crafting garments that can hide wear and minor soiling due to its unique speckled pattern?'</li><li>'What fabric offers good recovery and fit due to elastane content?'</li></ul> | | Fabric ID 0512 | <ul><li>'What is a medium weight textile with a soft drape for creating versatile garments?'</li><li>'What fabric is lightweight and breathable, perfect for making soft summer blouses?'</li><li>'Which fabric is suitable for making soft and comfortable shirts and blouses with a consistent light blue hue?'</li></ul> | | Fabric ID 0876 | <ul><li>'What type of fabric is suitable for apparel that requires both form and function?'</li><li>'Best fabric for creating statement pieces with a pop of color using a twill weave texture?'</li><li>'Which fabric has a slightly textured surface with medium fineness threads, ideal for structured garments?'</li></ul> | | Fabric ID 0856 | <ul><li>'What fabric would be best for making pants that maintain their shape while offering flexibility?'</li><li>'What fabric blend offers both comfort and durability for creating long-lasting clothing?'</li><li>'Which fabric is known for its simple yet durable qualities with no unique finishes?'</li></ul> | | Fabric ID 0608 | <ul><li>'What type of fabric is recommended for creating breathable and comfortable clothing for warm weather?'</li><li>'What fabric would be suitable for making lightweight sweaters with a ribbed texture and soft hand?'</li><li>'What type of fabric is best for making form-fitting t-shirts with a fluid drape?'</li></ul> | | Fabric ID 0573 | <ul><li>'What fabric blend offers durability and slight stretchability for structured yet comfortable dresses?'</li><li>'What fabric is durable yet versatile for various garment constructions?'</li><li>'What type of cloth is versatile for various seasons due to its weight and composition?'</li></ul> | | Fabric ID 0880 | <ul><li>'Need medium weight cotton fabric for creating casual shirts with a balanced color scheme?'</li><li>'Looking for plain weave cotton fabric with a fine thread count and even color distribution?'</li><li>'Which textile is versatile for various seasons like spring and summer due to its lightness?'</li></ul> | | Fabric ID 0450 | <ul><li>'Looking for a fabric for casual apparel applications in mild to warm climates with consistent dyeing?'</li><li>'Which fabric blend is recommended for creating apparel with both breathability and a gentle flow?'</li><li>'Where can I purchase a bamboo-spandex blend fabric suitable for all-season clothing with moisture-wicking properties?'</li></ul> | | Fabric ID 0459 | <ul><li>'What type of fabric is ideal for creating form-fitting tops with a fluid drape?'</li><li>'What fabric composition combines bamboo and Pret fibers for eco-friendly benefits?'</li><li>'What fabric can I use to make elegant and comfortable cardigans with stretch properties?'</li></ul> | | Fabric ID 0564 | <ul><li>'What fabric is recommended for making lightweight garments with a smooth flow and gentle folds?'</li><li>'What type of knit fabric is ideal for creating dresses with moderate stretchability?'</li><li>'What textile composition includes elastane and bamboo for stretchability and comfort in casual apparel?'</li></ul> | | Fabric ID 0731 | <ul><li>'What is the ideal textile for crafting activewear with moderate weight and stretch?'</li><li>'Where can I 
find a jersey knit textile with a soft texture and fine fibers for casual wear?'</li><li>'What is the recommended material for making activewear that allows for ease of movement?'</li></ul> | | Fabric ID 0578_1 | <ul><li>'What is the recommended fabric for creating spring and summer wear with a focus on breathability?'</li><li>'Which textile is recommended for creating blouses, skirts, and other apparel due to its natural sheen and uniform texture?'</li><li>'What type of fabric has a consistent coloration and high level of saturation for apparel applications?'</li></ul> | | Fabric ID 0855 | <ul><li>'Which fabric has a plain weave construction and a fine thread count for a smooth texture?'</li><li>'What fabric is durable and versatile for everyday wear?'</li><li>'What fabric can be used to make form-fitting clothing like dresses, thanks to its stretchability?'</li></ul> | | Fabric ID 0772 | <ul><li>'What fabric can I use to make casual dresses with a smooth texture and a lightweight feel?'</li><li>'What is a fabric with a tight structure and smooth drape ideal for making casual summer outfits?'</li><li>'What type of fabric is lightweight, breathable, and suitable for layering in variable climates?'</li></ul> | | Fabric ID 0606 | <ul><li>'What fabric is a periwinkle blue color with medium saturation and no visible defects?'</li><li>'What fabric has a soft and smooth texture with fine threads and a knit pattern?'</li><li>'Searching for a fabric that is durable, breathable, and suitable for people with sensitive skin, any options?'</li></ul> | | Fabric ID 0596 | <ul><li>'Are there any fabrics with a simple weave pattern that offer stretchability for semi-fitted garments?'</li><li>'What is the best fabric for creating garments with a good balance of structure and elasticity?'</li><li>'What fabric is suitable for creating garments that require good stretchability and resilience?'</li></ul> | | Fabric ID 0458 | <ul><li>'What type of fabric is commonly used in casual wear, loungewear, and active wear due to its durability and performance?'</li><li>'What type of fabric is suitable for creating comfortable loungewear and lightweight sweaters with a fine, smooth texture and good fabric care?'</li><li>'What is the best fabric for making active wear that offers breathability and performance?'</li></ul> | | Fabric ID 0523_1 | <ul><li>'What material provides a fluid drape and enough structure for t-shirts and lounge pants?'</li><li>'What fabric should I choose for producing clothing with good colorfastness and ease of care in a polyester composition?'</li><li>'What is the best material for creating casual dresses with a medium weight drape and a mix of darker and lighter grey tones?'</li></ul> | | Fabric ID 0730 | <ul><li>'What is the best fabric for making comfortable and stretchy t-shirts with a casual aesthetic?'</li><li>'Where can I buy a knit fabric that is versatile in styling and functional qualities for a range of clothing?'</li><li>'What type of fabric is durable and suitable for everyday wear with a casual aesthetic?'</li></ul> | | Fabric ID 0449 | <ul><li>'Which fabric contains bamboo and Spandex for creating comfortable casual dresses?'</li><li>'What fabric has a fluid drape and slight elasticity, suitable for summer dresses?'</li><li>'What is the recommended fabric for creating draped garments like dresses or tunics?'</li></ul> | | Fabric ID 0724 | <ul><li>'Which fabric is ideal for creating lightweight sweaters with a comfortable and breathable feel?'</li><li>'What type of fabric is ideal 
for making casual t-shirts with a vibrant striped pattern?'</li><li>'What is the recommended textile for making versatile garments that can be layered in cooler climates?'</li></ul> | | Fabric ID 0734 | <ul><li>'What knit fabric is versatile for use in various seasons and holds its shape well?'</li><li>'What type of fabric is recommended for creating casual tops with a gentle, soft drape?'</li><li>'What fabric is suitable for making lightweight and comfortable casual tops for everyday wear?'</li></ul> | | Fabric ID 0615 | <ul><li>'What fabric would be recommended for making moisture-wicking blouses suitable for warm climates?'</li><li>'What fabric would be apt for creating garments that require a fine, even weave structure?'</li><li>'What is a suitable fabric for creating drapery in light jackets with a slight sheen?'</li></ul> | | Fabric ID 0869 | <ul><li>'What type of cotton fabric is ideal for making shirts and blouses with a soft drape?'</li><li>'What textile has a slightly textured surface with a fine yet distinct weave?'</li><li>'Which cotton fabric is versatile and suitable for both menswear and womenswear?'</li></ul> | | Fabric ID 0864 | <ul><li>'Which fabric is breathable and soft to the touch, suitable for creating comfortable dresses?'</li><li>'Which fabric is recommended for making year-round garments with high color saturation?'</li><li>'What fabric can be used for making shirts, pants, and dresses that require a smooth drape and a hint of elasticity?'</li></ul> | | Fabric ID 0616 | <ul><li>'Which fabric is ideal for creating spring and summer collections with a soft touch and lightweight feel?'</li><li>'What textile is known for its easy care and durability in garment construction?'</li><li>'What type of fabric is best suited for creating blouses with a flowing drape and smooth texture?'</li></ul> | | Fabric ID 0866 | <ul><li>'Which fabric is durable, resilient, and has a slight give due to the Spandex content?'</li><li>'What fabric has a consistent charcoal gray hue with a matte finish and a twill weave pattern?'</li><li>'What fabric is recommended for making form-fitting jackets that are both durable and breathable?'</li></ul> | | Fabric ID 0601 | <ul><li>'What fabric would be suitable for creating draped skirts with a smooth surface and stretchability?'</li><li>'What is the best textile for creating draped skirts with a subtle iridescence?'</li><li>'Searching for a fabric with a smooth texture and slight shimmer effect for draped skirts?'</li></ul> | | Fabric ID 0618 | <ul><li>'What fabric has a soft drape and gentle folds, making it perfect for creating flowy and comfortable spring and summer dresses?'</li><li>'What type of knit fabric offers good resistance to wrinkles and shrinkage for practical everyday wear?'</li><li>'Searching for a polyester knit fabric with a consistent hue and saturation for making versatile and adaptable garments.'</li></ul> | | Fabric ID 0773 | <ul><li>'Which fabric is versatile and suitable for creating durable garments for everyday wear?'</li><li>'What fabric is suitable for making casual wear like t-shirts, dresses, and tops?'</li><li>'What fabric is known for its stable weave with a small percentage of elastane for comfort and durability?'</li></ul> | ## Evaluation ### Metrics | Label | Accuracy | |:--------|:---------| | **all** | 0.3837 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. 
```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("Jazielinho/fabric_model_1") # Run inference preds = model("What fabric has a comfortable feel and is suitable for people with sensitive skin?") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:--------|:----| | Word count | 7 | 15.4858 | 30 | | Label | Training Sample Count | |:-----------------|:----------------------| | Fabric ID 0447 | 39 | | Fabric ID 0448 | 40 | | Fabric ID 0449 | 41 | | Fabric ID 0450 | 32 | | Fabric ID 0453 | 37 | | Fabric ID 0455 | 33 | | Fabric ID 0456 | 36 | | Fabric ID 0458 | 40 | | Fabric ID 0459 | 30 | | Fabric ID 0462 | 36 | | Fabric ID 0462_1 | 42 | | Fabric ID 0512 | 38 | | Fabric ID 0513 | 39 | | Fabric ID 0522 | 43 | | Fabric ID 0523_1 | 41 | | Fabric ID 0526_1 | 41 | | Fabric ID 0527_1 | 35 | | Fabric ID 0528_1 | 42 | | Fabric ID 0564 | 40 | | Fabric ID 0565 | 43 | | Fabric ID 0571 | 44 | | Fabric ID 0573 | 36 | | Fabric ID 0573_1 | 37 | | Fabric ID 0575 | 40 | | Fabric ID 0575_1 | 44 | | Fabric ID 0576_1 | 42 | | Fabric ID 0578 | 41 | | Fabric ID 0578_1 | 38 | | Fabric ID 0579 | 41 | | Fabric ID 0579_1 | 46 | | Fabric ID 0596 | 41 | | Fabric ID 0598 | 38 | | Fabric ID 0600 | 40 | | Fabric ID 0601 | 39 | | Fabric ID 0606 | 41 | | Fabric ID 0608 | 44 | | Fabric ID 0612 | 45 | | Fabric ID 0613 | 40 | | Fabric ID 0614 | 37 | | Fabric ID 0615 | 44 | | Fabric ID 0616 | 39 | | Fabric ID 0618 | 42 | | Fabric ID 0645 | 36 | | Fabric ID 0719_1 | 43 | | Fabric ID 0722 | 42 | | Fabric ID 0723 | 37 | | Fabric ID 0724 | 41 | | Fabric ID 0725 | 44 | | Fabric ID 0730 | 36 | | Fabric ID 0731 | 40 | | Fabric ID 0733 | 43 | | Fabric ID 0734 | 44 | | Fabric ID 0735 | 39 | | Fabric ID 0736 | 38 | | Fabric ID 0742 | 38 | | Fabric ID 0745 | 43 | | Fabric ID 0748 | 41 | | Fabric ID 0756 | 44 | | Fabric ID 0768 | 40 | | Fabric ID 0769 | 41 | | Fabric ID 0770 | 35 | | Fabric ID 0772 | 43 | | Fabric ID 0773 | 41 | | Fabric ID 0855 | 43 | | Fabric ID 0856 | 37 | | Fabric ID 0859 | 41 | | Fabric ID 0862 | 36 | | Fabric ID 0863 | 38 | | Fabric ID 0864 | 42 | | Fabric ID 0866 | 41 | | Fabric ID 0869 | 39 | | Fabric ID 0873 | 43 | | Fabric ID 0874 | 34 | | Fabric ID 0876 | 40 | | Fabric ID 0880 | 41 | ### Training Hyperparameters - batch_size: (256, 256) - num_epochs: (20, 20) - max_steps: -1 - sampling_strategy: undersampling - body_learning_rate: (2e-05, 1e-05) - head_learning_rate: 0.01 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: True ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:-------:|:--------:|:-------------:|:---------------:| | 0.0021 | 1 | 0.2732 | - | | 0.1040 | 50 | 0.2348 | - | | 0.2079 | 100 | 0.2277 | - | | 0.3119 | 150 | 0.2419 | - | | 0.4158 | 200 | 0.2401 | - | | 0.5198 | 250 | 0.2367 | 
- | | 0.6237 | 300 | 0.237 | - | | 0.7277 | 350 | 0.2372 | - | | 0.8316 | 400 | 0.2283 | - | | 0.9356 | 450 | 0.223 | - | | 1.0 | 481 | - | 0.207 | | 1.0395 | 500 | 0.2075 | - | | 1.1435 | 550 | 0.2162 | - | | 1.2474 | 600 | 0.1984 | - | | 1.3514 | 650 | 0.2173 | - | | 1.4553 | 700 | 0.2154 | - | | 1.5593 | 750 | 0.1912 | - | | 1.6632 | 800 | 0.2014 | - | | 1.7672 | 850 | 0.1866 | - | | 1.8711 | 900 | 0.1933 | - | | 1.9751 | 950 | 0.1821 | - | | 2.0 | 962 | - | 0.1863 | | 2.0790 | 1000 | 0.1607 | - | | 2.1830 | 1050 | 0.1544 | - | | 2.2869 | 1100 | 0.1624 | - | | 2.3909 | 1150 | 0.1586 | - | | 2.4948 | 1200 | 0.1445 | - | | 2.5988 | 1250 | 0.1662 | - | | 2.7027 | 1300 | 0.1515 | - | | 2.8067 | 1350 | 0.158 | - | | 2.9106 | 1400 | 0.1316 | - | | **3.0** | **1443** | **-** | **0.1824** | | 3.0146 | 1450 | 0.138 | - | | 3.1185 | 1500 | 0.1414 | - | | 3.2225 | 1550 | 0.1249 | - | | 3.3264 | 1600 | 0.1336 | - | | 3.4304 | 1650 | 0.1249 | - | | 3.5343 | 1700 | 0.1308 | - | | 3.6383 | 1750 | 0.1088 | - | | 3.7422 | 1800 | 0.122 | - | | 3.8462 | 1850 | 0.1029 | - | | 3.9501 | 1900 | 0.1065 | - | | 4.0 | 1924 | - | 0.1836 | | 4.0541 | 1950 | 0.1133 | - | | 4.1580 | 2000 | 0.1102 | - | | 4.2620 | 2050 | 0.1209 | - | | 4.3659 | 2100 | 0.1054 | - | | 4.4699 | 2150 | 0.0874 | - | | 4.5738 | 2200 | 0.0896 | - | | 4.6778 | 2250 | 0.1104 | - | | 4.7817 | 2300 | 0.0912 | - | | 4.8857 | 2350 | 0.0766 | - | | 4.9896 | 2400 | 0.0778 | - | | 5.0 | 2405 | - | 0.1952 | | 5.0936 | 2450 | 0.114 | - | | 5.1975 | 2500 | 0.0869 | - | | 5.3015 | 2550 | 0.0912 | - | | 5.4054 | 2600 | 0.103 | - | | 5.5094 | 2650 | 0.0748 | - | | 5.6133 | 2700 | 0.0911 | - | | 5.7173 | 2750 | 0.0721 | - | | 5.8212 | 2800 | 0.0964 | - | | 5.9252 | 2850 | 0.0712 | - | | 6.0 | 2886 | - | 0.1938 | | 6.0291 | 2900 | 0.0831 | - | | 6.1331 | 2950 | 0.0924 | - | | 6.2370 | 3000 | 0.0862 | - | | 6.3410 | 3050 | 0.0725 | - | | 6.4449 | 3100 | 0.0828 | - | | 6.5489 | 3150 | 0.0645 | - | | 6.6528 | 3200 | 0.0741 | - | | 6.7568 | 3250 | 0.0589 | - | | 6.8607 | 3300 | 0.075 | - | | 6.9647 | 3350 | 0.075 | - | | 7.0 | 3367 | - | 0.2016 | | 7.0686 | 3400 | 0.0893 | - | | 7.1726 | 3450 | 0.0727 | - | | 7.2765 | 3500 | 0.0669 | - | | 7.3805 | 3550 | 0.0702 | - | | 7.4844 | 3600 | 0.0636 | - | | 7.5884 | 3650 | 0.0605 | - | | 7.6923 | 3700 | 0.0707 | - | | 7.7963 | 3750 | 0.0597 | - | | 7.9002 | 3800 | 0.0577 | - | | 8.0 | 3848 | - | 0.2067 | | 8.0042 | 3850 | 0.0515 | - | | 8.1081 | 3900 | 0.0686 | - | | 8.2121 | 3950 | 0.0587 | - | | 8.3160 | 4000 | 0.057 | - | | 8.4200 | 4050 | 0.0693 | - | | 8.5239 | 4100 | 0.0812 | - | | 8.6279 | 4150 | 0.0592 | - | | 8.7318 | 4200 | 0.07 | - | | 8.8358 | 4250 | 0.064 | - | | 8.9397 | 4300 | 0.0503 | - | | 9.0 | 4329 | - | 0.2122 | | 9.0437 | 4350 | 0.0489 | - | | 9.1476 | 4400 | 0.0602 | - | | 9.2516 | 4450 | 0.0673 | - | | 9.3555 | 4500 | 0.0665 | - | | 9.4595 | 4550 | 0.0672 | - | | 9.5634 | 4600 | 0.07 | - | | 9.6674 | 4650 | 0.042 | - | | 9.7713 | 4700 | 0.0656 | - | | 9.8753 | 4750 | 0.0557 | - | | 9.9792 | 4800 | 0.0648 | - | | 10.0 | 4810 | - | 0.215 | | 10.0832 | 4850 | 0.0455 | - | | 10.1871 | 4900 | 0.0668 | - | | 10.2911 | 4950 | 0.0453 | - | | 10.3950 | 5000 | 0.0555 | - | | 10.4990 | 5050 | 0.0679 | - | | 10.6029 | 5100 | 0.0516 | - | | 10.7069 | 5150 | 0.0448 | - | | 10.8108 | 5200 | 0.0458 | - | | 10.9148 | 5250 | 0.0544 | - | | 11.0 | 5291 | - | 0.2172 | | 11.0187 | 5300 | 0.0453 | - | | 11.1227 | 5350 | 0.0442 | - | | 11.2266 | 5400 | 0.0396 | - | | 11.3306 | 5450 | 0.0507 | - | | 11.4345 | 5500 | 
0.0515 | - | | 11.5385 | 5550 | 0.0503 | - | | 11.6424 | 5600 | 0.0521 | - | | 11.7464 | 5650 | 0.0551 | - | | 11.8503 | 5700 | 0.0572 | - | | 11.9543 | 5750 | 0.0604 | - | | 12.0 | 5772 | - | 0.2245 | | 12.0582 | 5800 | 0.0445 | - | | 12.1622 | 5850 | 0.0564 | - | | 12.2661 | 5900 | 0.0449 | - | | 12.3701 | 5950 | 0.0502 | - | | 12.4740 | 6000 | 0.0517 | - | | 12.5780 | 6050 | 0.0426 | - | | 12.6819 | 6100 | 0.0386 | - | | 12.7859 | 6150 | 0.0446 | - | | 12.8898 | 6200 | 0.0574 | - | | 12.9938 | 6250 | 0.0546 | - | | 13.0 | 6253 | - | 0.223 | | 13.0977 | 6300 | 0.0381 | - | | 13.2017 | 6350 | 0.047 | - | | 13.3056 | 6400 | 0.0425 | - | | 13.4096 | 6450 | 0.0445 | - | | 13.5135 | 6500 | 0.056 | - | | 13.6175 | 6550 | 0.0533 | - | | 13.7214 | 6600 | 0.0466 | - | | 13.8254 | 6650 | 0.0506 | - | | 13.9293 | 6700 | 0.0402 | - | | 14.0 | 6734 | - | 0.2238 | | 14.0333 | 6750 | 0.0375 | - | | 14.1372 | 6800 | 0.0447 | - | | 14.2412 | 6850 | 0.0584 | - | | 14.3451 | 6900 | 0.0348 | - | | 14.4491 | 6950 | 0.0459 | - | | 14.5530 | 7000 | 0.0465 | - | | 14.6570 | 7050 | 0.0421 | - | | 14.7609 | 7100 | 0.0537 | - | | 14.8649 | 7150 | 0.041 | - | | 14.9688 | 7200 | 0.0281 | - | | 15.0 | 7215 | - | 0.2247 | | 15.0728 | 7250 | 0.0431 | - | | 15.1767 | 7300 | 0.039 | - | | 15.2807 | 7350 | 0.0408 | - | | 15.3846 | 7400 | 0.048 | - | | 15.4886 | 7450 | 0.0354 | - | | 15.5925 | 7500 | 0.0626 | - | | 15.6965 | 7550 | 0.0396 | - | | 15.8004 | 7600 | 0.045 | - | | 15.9044 | 7650 | 0.0432 | - | | 16.0 | 7696 | - | 0.2246 | | 16.0083 | 7700 | 0.0385 | - | | 16.1123 | 7750 | 0.0368 | - | | 16.2162 | 7800 | 0.0628 | - | | 16.3202 | 7850 | 0.035 | - | | 16.4241 | 7900 | 0.0264 | - | | 16.5281 | 7950 | 0.0275 | - | | 16.6320 | 8000 | 0.0383 | - | | 16.7360 | 8050 | 0.0469 | - | | 16.8399 | 8100 | 0.0445 | - | | 16.9439 | 8150 | 0.0357 | - | | 17.0 | 8177 | - | 0.2268 | | 17.0478 | 8200 | 0.0456 | - | | 17.1518 | 8250 | 0.053 | - | | 17.2557 | 8300 | 0.0498 | - | | 17.3597 | 8350 | 0.0368 | - | | 17.4636 | 8400 | 0.0473 | - | | 17.5676 | 8450 | 0.0422 | - | | 17.6715 | 8500 | 0.0362 | - | | 17.7755 | 8550 | 0.0292 | - | | 17.8794 | 8600 | 0.0431 | - | | 17.9834 | 8650 | 0.0412 | - | | 18.0 | 8658 | - | 0.2276 | | 18.0873 | 8700 | 0.0655 | - | | 18.1913 | 8750 | 0.0405 | - | | 18.2952 | 8800 | 0.0455 | - | | 18.3992 | 8850 | 0.0324 | - | | 18.5031 | 8900 | 0.038 | - | | 18.6071 | 8950 | 0.0315 | - | | 18.7110 | 9000 | 0.0468 | - | | 18.8150 | 9050 | 0.0451 | - | | 18.9189 | 9100 | 0.032 | - | | 19.0 | 9139 | - | 0.2268 | | 19.0229 | 9150 | 0.0371 | - | | 19.1268 | 9200 | 0.0439 | - | | 19.2308 | 9250 | 0.0472 | - | | 19.3347 | 9300 | 0.0362 | - | | 19.4387 | 9350 | 0.0341 | - | | 19.5426 | 9400 | 0.036 | - | | 19.6466 | 9450 | 0.0382 | - | | 19.7505 | 9500 | 0.0288 | - | | 19.8545 | 9550 | 0.04 | - | | 19.9584 | 9600 | 0.0277 | - | | 20.0 | 9620 | - | 0.2277 | * The bold row denotes the saved checkpoint. 
### Framework Versions - Python: 3.10.12 - SetFit: 1.0.3 - Sentence Transformers: 2.7.0 - Transformers: 4.40.1 - PyTorch: 2.2.1+cu121 - Datasets: 2.19.0 - Tokenizers: 0.19.1 ## Citation ### BibTeX ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"library_name": "setfit", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/paraphrase-MiniLM-L6-v2", "widget": [{"text": "What fabric has a comfortable feel and is suitable for people with sensitive skin?"}, {"text": "What is the most recommended fabric for making outerwear that requires a blend of comfort and resilience?"}, {"text": "What fabric has a fluid drape and is ideal for creating lightweight summer dresses?"}, {"text": "Which fabric is best for creating versatile clothing items like casual shirts, blouses, and dresses in a periwinkle blue hue?"}, {"text": "What kind of fabric is suitable for making form-fitting activewear like yoga pants and t-shirts?"}], "pipeline_tag": "text-classification", "inference": true, "model-index": [{"name": "SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.3836898395721925, "name": "Accuracy"}]}]}]}
Jazielinho/fabric_model_1
null
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-MiniLM-L6-v2", "model-index", "region:us" ]
null
2024-05-01T11:24:33+00:00
[ "2209.11055" ]
[]
TAGS #setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us
SetFit with sentence-transformers/paraphrase-MiniLM-L6-v2 ========================================================= This is a SetFit model that can be used for Text Classification. This SetFit model uses sentence-transformers/paraphrase-MiniLM-L6-v2 as the Sentence Transformer embedding model. A LogisticRegression instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a Sentence Transformer with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. Model Details ------------- ### Model Description * Model Type: SetFit * Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2 * Classification head: a LogisticRegression instance * Maximum Sequence Length: 128 tokens * Number of Classes: 75 classes ### Model Sources * Repository: SetFit on GitHub * Paper: Efficient Few-Shot Learning Without Prompts * Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts ### Model Labels Evaluation ---------- ### Metrics Uses ---- ### Direct Use for Inference First install the SetFit library: Then you can load this model and run inference. Training Details ---------------- ### Training Set Metrics ### Training Hyperparameters * batch\_size: (256, 256) * num\_epochs: (20, 20) * max\_steps: -1 * sampling\_strategy: undersampling * body\_learning\_rate: (2e-05, 1e-05) * head\_learning\_rate: 0.01 * loss: CosineSimilarityLoss * distance\_metric: cosine\_distance * margin: 0.25 * end\_to\_end: False * use\_amp: False * warmup\_proportion: 0.1 * seed: 42 * eval\_max\_steps: -1 * load\_best\_model\_at\_end: True ### Training Results * The bold row denotes the saved checkpoint. ### Framework Versions * Python: 3.10.12 * SetFit: 1.0.3 * Sentence Transformers: 2.7.0 * Transformers: 4.40.1 * PyTorch: 2.2.1+cu121 * Datasets: 2.19.0 * Tokenizers: 0.19.1 ### BibTeX
[ "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True", "### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ "TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us \n", "### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes", "### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts", "### Model Labels\n\n\n\nEvaluation\n----------", "### Metrics\n\n\n\nUses\n----", "### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------", "### Training Set Metrics", "### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True", "### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.", "### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1", "### BibTeX" ]
[ 68, 60, 42, 16, 10, 43, 7, 169, 14, 75, 6 ]
[ "TAGS\n#setfit #safetensors #bert #sentence-transformers #text-classification #generated_from_setfit_trainer #arxiv-2209.11055 #base_model-sentence-transformers/paraphrase-MiniLM-L6-v2 #model-index #region-us \n### Model Description\n\n\n* Model Type: SetFit\n* Sentence Transformer body: sentence-transformers/paraphrase-MiniLM-L6-v2\n* Classification head: a LogisticRegression instance\n* Maximum Sequence Length: 128 tokens\n* Number of Classes: 75 classes### Model Sources\n\n\n* Repository: SetFit on GitHub\n* Paper: Efficient Few-Shot Learning Without Prompts\n* Blogpost: SetFit: Efficient Few-Shot Learning Without Prompts### Model Labels\n\n\n\nEvaluation\n----------### Metrics\n\n\n\nUses\n----### Direct Use for Inference\n\n\nFirst install the SetFit library:\n\n\nThen you can load this model and run inference.\n\n\nTraining Details\n----------------### Training Set Metrics### Training Hyperparameters\n\n\n* batch\\_size: (256, 256)\n* num\\_epochs: (20, 20)\n* max\\_steps: -1\n* sampling\\_strategy: undersampling\n* body\\_learning\\_rate: (2e-05, 1e-05)\n* head\\_learning\\_rate: 0.01\n* loss: CosineSimilarityLoss\n* distance\\_metric: cosine\\_distance\n* margin: 0.25\n* end\\_to\\_end: False\n* use\\_amp: False\n* warmup\\_proportion: 0.1\n* seed: 42\n* eval\\_max\\_steps: -1\n* load\\_best\\_model\\_at\\_end: True### Training Results\n\n\n\n* The bold row denotes the saved checkpoint.### Framework Versions\n\n\n* Python: 3.10.12\n* SetFit: 1.0.3\n* Sentence Transformers: 2.7.0\n* Transformers: 4.40.1\n* PyTorch: 2.2.1+cu121\n* Datasets: 2.19.0\n* Tokenizers: 0.19.1### BibTeX" ]
text-generation
transformers
# Uploaded model - **Developed by:** herisan - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
herisan/llama-3-8b-alpaca-cleaned
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:26:41+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: herisan - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: herisan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: herisan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 76, 79 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: herisan\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3995 - F1: 0.6887 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.023 | 1.0 | 50 | 0.5000 | 0.5805 | | 0.4736 | 2.0 | 100 | 0.4185 | 0.6689 | | 0.3709 | 3.0 | 150 | 0.3995 | 0.6887 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
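A minimal usage sketch; the card leaves the label set undocumented, so standard PAN-X/WikiANN NER tags are assumed from the repository name:

```python
# Hedged usage sketch: assumes PAN-X-style NER labels (PER/ORG/LOC), as the
# repo name "finetuned-panx-en" suggests; the card itself does not say.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="u00890358/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Tim Cook leads Apple from Cupertino."))
```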
{"tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-en", "results": []}]}
u00890358/xlm-roberta-base-finetuned-panx-en
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:26:50+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-en ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3995 * F1: 0.6887 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 47, 101, 5, 44 ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# HPY_gpt2_v5

This model is a fine-tuned version of [ClassCat/gpt2-base-french](https://huggingface.co/ClassCat/gpt2-base-french) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5584

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 202  | 1.7781          |
| No log        | 2.0   | 404  | 1.6314          |
| 1.9852        | 3.0   | 606  | 1.5753          |
| 1.9852        | 4.0   | 808  | 1.5584          |

### Framework versions

- Transformers 4.30.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.13.3
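A minimal usage sketch; since the base model is French GPT-2, a French prompt is assumed to reflect the intended use:

```python
# Hedged usage sketch: the base model is ClassCat/gpt2-base-french, so the
# fine-tune presumably generates French text.
from transformers import pipeline

generator = pipeline("text-generation", model="azizkt/HPY_gpt2_v5")
out = generator("Il était une fois", max_new_tokens=50, do_sample=True)
print(out[0]["generated_text"])
```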
{"license": "cc-by-sa-4.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "HPY_gpt2_v5", "results": []}]}
azizkt/HPY_gpt2_v5
null
[ "transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:cc-by-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:28:10+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
HPY\_gpt2\_v5 ============= This model is a fine-tuned version of ClassCat/gpt2-base-french on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.5584 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.30.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
[ 57, 124, 5, 44 ]
[ "TAGS\n#transformers #pytorch #tensorboard #gpt2 #text-generation #generated_from_trainer #license-cc-by-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4### Training results### Framework versions\n\n\n* Transformers 4.30.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.13.3" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # LongRiver/distilbert-base-cased-finetuned This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7185 - Train End Logits Accuracy: 0.5917 - Train Start Logits Accuracy: 0.5638 - Validation Loss: 2.0391 - Validation End Logits Accuracy: 0.5252 - Validation Start Logits Accuracy: 0.4886 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6786, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 2.3543 | 0.5058 | 0.4992 | 2.0820 | 0.5253 | 0.4917 | 0 | | 1.7185 | 0.5917 | 0.5638 | 2.0391 | 0.5252 | 0.4886 | 1 | ### Framework versions - Transformers 4.40.1 - TensorFlow 2.15.0 - Datasets 2.19.0 - Tokenizers 0.19.1
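A minimal usage sketch; the card was produced by a Keras callback, so TensorFlow weights are assumed and loaded with `framework="tf"`:

```python
# Hedged usage sketch: the checkpoint was trained with Keras/TensorFlow, so
# framework="tf" is assumed to pick up the TF weights.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="LongRiver/distilbert-base-cased-finetuned",
    framework="tf",
)
result = qa(
    question="Which library was used for training?",
    context="This checkpoint was fine-tuned with Keras on TensorFlow 2.15.",
)
print(result["answer"], round(result["score"], 3))
```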
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilbert-base-cased", "model-index": [{"name": "LongRiver/distilbert-base-cased-finetuned", "results": []}]}
LongRiver/distilbert-base-cased-finetuned
null
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "base_model:distilbert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:29:03+00:00
[]
[]
TAGS #transformers #tf #tensorboard #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #region-us
LongRiver/distilbert-base-cased-finetuned ========================================= This model is a fine-tuned version of distilbert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 1.7185 * Train End Logits Accuracy: 0.5917 * Train Start Logits Accuracy: 0.5638 * Validation Loss: 2.0391 * Validation End Logits Accuracy: 0.5252 * Validation Start Logits Accuracy: 0.4886 * Epoch: 1 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': True, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 6786, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.40.1 * TensorFlow 2.15.0 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 6786, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tf #tensorboard #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 6786, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 57, 291, 5, 38 ]
[ "TAGS\n#transformers #tf #tensorboard #distilbert #question-answering #generated_from_keras_callback #base_model-distilbert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': True, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 6786, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* TensorFlow 2.15.0\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
token-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
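Since the card above is an unfilled template, only a heavily hedged sketch is possible; the repo tags do indicate a BERT token-classification head:

```python
# Heavily hedged sketch: the card is an unfilled template, so the label set
# and training data are unknown. The repo tags do indicate a BERT
# token-classification model, so a generic pipeline call should load it.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="007Rahul/ner_model",
    aggregation_strategy="simple",
)
print(tagger("Barack Obama visited Paris in 2015."))
```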
{"library_name": "transformers", "tags": []}
007Rahul/ner_model
null
[ "transformers", "safetensors", "bert", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:30:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 37, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
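The repository appears to contain only a tokenizer (its tags list no model architecture), so a hedged loading sketch via `AutoTokenizer`:

```python
# Hedged sketch: the repo is tagged "transformers" only and is named
# "tokenizer", so it presumably ships just a tokenizer, not model weights.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("007Rahul/tokenizer")
enc = tokenizer("Hello world!")
print(enc["input_ids"])
print(tokenizer.convert_ids_to_tokens(enc["input_ids"]))
```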
{"library_name": "transformers", "tags": []}
007Rahul/tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:31:36+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 22, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
text-generation
transformers
# Uploaded model - **Developed by:** abdulrehmanibk - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
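A minimal loading sketch with Unsloth's `FastLanguageModel`; the sequence length and 4-bit flag below are illustrative assumptions, not values from the card:

```python
# Hedged sketch: Unsloth uploads are often LoRA adapters over the 4-bit base;
# FastLanguageModel.from_pretrained handles both adapter and merged repos.
# max_seq_length and load_in_4bit are assumptions, not documented values.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="abdulrehmanibk/mpg2_project",
    max_seq_length=2048,
    dtype=None,          # auto-detect
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode

inputs = tokenizer("Hello, ", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```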
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "metrics": ["accuracy"], "base_model": "unsloth/llama-3-8b-bnb-4bit", "pipeline_tag": "text-generation"}
abdulrehmanibk/mpg2_project
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "text-generation", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:33:41+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #text-generation #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: abdulrehmanibk - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: abdulrehmanibk\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #text-generation #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: abdulrehmanibk\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 68, 82 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #text-generation #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: abdulrehmanibk\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**

This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
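The card points to Unit 4 of the course rather than showing the algorithm; below is a minimal REINFORCE sketch in that spirit. The network shape and hyperparameters are assumptions and need not match the uploaded checkpoint:

```python
# Minimal REINFORCE sketch for CartPole-v1, in the spirit of Unit 4 of the
# Deep RL Course. Architecture and hyperparameters are assumptions; the
# uploaded checkpoint's exact layout may differ.
import gymnasium as gym
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim=4, n_actions=2, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )

    def act(self, obs):
        probs = self.net(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        return action.item(), dist.log_prob(action)

env = gym.make("CartPole-v1")
policy = Policy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        action, logp = policy.act(obs)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        log_probs.append(logp)
        rewards.append(reward)
    # Discounted returns, computed backwards over the episode.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Policy-gradient loss: maximize return-weighted log-probabilities.
    loss = -(torch.stack(log_probs) * returns).sum()
    optim.zero_grad()
    loss.backward()
    optim.step()
```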
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
commanderxa/CartPole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-05-01T11:40:11+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ 32, 46 ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
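The card above is an unfilled template; PiSSA (per the repo name) denotes a PEFT initialization scheme, so only a heavily hedged loading sketch is possible, with the repo layout itself an assumption:

```python
# Heavily hedged sketch: the template card documents nothing, so the repo
# layout is assumed. PiSSA releases usually pair base weights with a PEFT
# adapter; if this repo hosts only full merged weights, skip the PeftModel step.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "fxmeng/PiSSA-Llama-3-70B-4bit-r64-1iter"
tokenizer = AutoTokenizer.from_pretrained(repo)
base = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")
# Attach the PiSSA-initialized adapter on top of the base weights
# (assuming the adapter files sit in the same repository):
model = PeftModel.from_pretrained(base, repo)
```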
{"library_name": "transformers", "tags": []}
fxmeng/PiSSA-Llama-3-70B-4bit-r64-1iter
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:41:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mistral_instruct_generation This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset. It achieves the following results on the evaluation set: - Loss: 1.7257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_steps: 0.03 - training_steps: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0986 | 0.1709 | 20 | 1.8805 | | 2.0093 | 0.3419 | 40 | 1.7881 | | 2.0904 | 0.5128 | 60 | 1.7578 | | 1.8353 | 0.6838 | 80 | 1.7418 | | 1.7356 | 0.8547 | 100 | 1.7257 | ### Framework versions - PEFT 0.10.0 - Transformers 4.41.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
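A minimal inference sketch, assuming this repository holds the PEFT adapter for mistralai/Mistral-7B-Instruct-v0.2 that the tags indicate; the prompt and generation settings are illustrative:

```python
# Sketch: load the stated base model, then attach this repo's PEFT adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "ajinkyabhandare/mistral_instruct_generation")

prompt = "[INST] Write one sentence about fine-tuning. [/INST]"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```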
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "mistral_instruct_generation", "results": []}]}
ajinkyabhandare/mistral_instruct_generation
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-01T11:41:23+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
mistral\_instruct\_generation ============================= This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the generator dataset. It achieves the following results on the evaluation set: * Loss: 1.7257 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_steps: 0.03 * training\_steps: 100 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.41.0.dev0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 62, 117, 5, 55 ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_steps: 0.03\n* training\\_steps: 100### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.41.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
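A minimal loading sketch, assuming the repository stores a PEFT adapter trained with SFT on meta-llama/Llama-2-7b-chat-hf as the tags state (the gated base weights require accepting the Llama 2 license); the chat prompt is illustrative:

```python
# Sketch: AutoPeftModelForCausalLM resolves the base model from the adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "wenshicheng97/no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompt = "[INST] Hello, who are you? [/INST]"  # illustrative Llama-2 chat format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```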
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2", "results": []}]}
wenshicheng97/no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-05-01T11:41:48+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
# no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2 This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "# no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ 52, 62, 7, 9, 9, 4, 113, 5, 48 ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n# no_board_history_with_sys_history_v2_10epoch_lr5e-5_batch2\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 64\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10.0### Training results### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.2\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - mrtuandao/dreambooth-tuan-without-prior This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png) DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('mrtuandao/dreambooth-tuan-without-prior') image = pipeline('a photo of SKS person').images[0] image ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true, "instance_prompt": "a photo of SKS person"}
mrtuandao/dreambooth-tuan-without-prior
null
[ "diffusers", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-05-01T11:44:43+00:00
[]
[]
TAGS #diffusers #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth - mrtuandao/dreambooth-tuan-without-prior This is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth. You can find some example images in the following. !img_0 !img_1 !img_2 !img_3 DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# DreamBooth - mrtuandao/dreambooth-tuan-without-prior\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth.\nYou can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth - mrtuandao/dreambooth-tuan-without-prior\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth.\nYou can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 79, 101, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth - mrtuandao/dreambooth-tuan-without-prior\n\nThis is a dreambooth model derived from runwayml/stable-diffusion-v1-5. The weights were trained on a photo of SKS person using DreamBooth.\nYou can find some example images in the following. \n\n!img_0\n!img_1\n!img_2\n!img_3\n\n\nDreamBooth for the text encoder was enabled: False.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
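The card itself gives no startup snippet; the following sketch assumes a BLIP-2 OPT-2.7B captioning checkpoint, which is inferred only from the repository name baraah/blip2-opt-2.7b-1-5 and not confirmed by the card:

```python
# Sketch: BLIP-2 style image captioning; checkpoint type is an assumption from the repo name.
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("baraah/blip2-opt-2.7b-1-5")
model = Blip2ForConditionalGeneration.from_pretrained("baraah/blip2-opt-2.7b-1-5")

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True))
```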
{"library_name": "transformers", "tags": []}
baraah/blip2-opt-2.7b-1-5
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:45:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 26, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1769 - F1: 0.8516 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2935 | 1.0 | 835 | 0.1943 | 0.8149 | | 0.1554 | 2.0 | 1670 | 0.1648 | 0.8464 | | 0.1014 | 3.0 | 2505 | 0.1769 | 0.8516 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
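A usage sketch for this token-classification fine-tune via the standard pipeline; the entity labels depend on the (unspecified) PAN-X training configuration, and the input sentence is illustrative:

```python
# Sketch: standard token-classification inference with sub-word aggregation.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="u00890358/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Angela Merkel besuchte Paris."))  # illustrative multilingual sentence
```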
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-all", "results": []}]}
u00890358/xlm-roberta-base-finetuned-panx-all
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:45:31+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-all =================================== This model is a fine-tuned version of xlm-roberta-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1769 * F1: 0.8516 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 51, 101, 5, 44 ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset. This is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of space telescope** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on `telescope` images for the science theme. ## Usage ```python from diffusers import StableDiffusionPipeline pipeline = StableDiffusionPipeline.from_pretrained('livewalk/space-telescope') image = pipeline().images[0] image ```
{"license": "creativeml-openrail-m", "tags": ["pytorch", "diffusers", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "science"], "widget": [{"text": "a photo of space telescope in the second Sun-Earth Lagrange point (L2)"}]}
livewalk/space-telescope
null
[ "diffusers", "safetensors", "pytorch", "stable-diffusion", "text-to-image", "diffusion-models-class", "dreambooth-hackathon", "science", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-05-01T11:47:18+00:00
[]
[]
TAGS #diffusers #safetensors #pytorch #stable-diffusion #text-to-image #diffusion-models-class #dreambooth-hackathon #science #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset. This is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the 'instance_prompt': a photo of space telescope This model was created as part of the DreamBooth Hackathon . Visit the organisation page for instructions on how to take part! ## Description This is a Stable Diffusion model fine-tuned on 'telescope' images for the science theme. ## Usage
[ "# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset.\n\nThis is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the 'instance_prompt': a photo of space telescope\n\nThis model was created as part of the DreamBooth Hackathon . Visit the organisation page for instructions on how to take part!", "## Description\n\n\nThis is a Stable Diffusion model fine-tuned on 'telescope' images for the science theme.", "## Usage" ]
[ "TAGS\n#diffusers #safetensors #pytorch #stable-diffusion #text-to-image #diffusion-models-class #dreambooth-hackathon #science #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset.\n\nThis is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the 'instance_prompt': a photo of space telescope\n\nThis model was created as part of the DreamBooth Hackathon . Visit the organisation page for instructions on how to take part!", "## Description\n\n\nThis is a Stable Diffusion model fine-tuned on 'telescope' images for the science theme.", "## Usage" ]
[ 68, 89, 22, 3 ]
[ "TAGS\n#diffusers #safetensors #pytorch #stable-diffusion #text-to-image #diffusion-models-class #dreambooth-hackathon #science #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n# DreamBooth model for the space concept trained by livewalk on the livewalk/james-webb-telescope dataset.\n\nThis is a Stable Diffusion model fine-tuned on the space concept with DreamBooth. It can be used by modifying the 'instance_prompt': a photo of space telescope\n\nThis model was created as part of the DreamBooth Hackathon . Visit the organisation page for instructions on how to take part!## Description\n\n\nThis is a Stable Diffusion model fine-tuned on 'telescope' images for the science theme.## Usage" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3209 - Accuracy: 0.8557 - Recall: 0.8557 - F1: 0.8542 - Precision: 0.8560 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.5155 | 0.9974 | 293 | 0.5710 | 0.7935 | 0.7935 | 0.7821 | 0.7895 | | 0.4245 | 1.9983 | 587 | 0.4729 | 0.8238 | 0.8238 | 0.8187 | 0.8266 | | 0.4183 | 2.9991 | 881 | 0.4145 | 0.8408 | 0.8408 | 0.8309 | 0.8350 | | 0.4088 | 4.0 | 1175 | 0.3901 | 0.8425 | 0.8425 | 0.8375 | 0.8501 | | 0.3489 | 4.9974 | 1468 | 0.3703 | 0.8463 | 0.8463 | 0.8446 | 0.8518 | | 0.3115 | 5.9983 | 1762 | 0.3500 | 0.8540 | 0.8540 | 0.8525 | 0.8605 | | 0.3087 | 6.9991 | 2056 | 0.3338 | 0.8519 | 0.8519 | 0.8494 | 0.8582 | | 0.2372 | 8.0 | 2350 | 0.3181 | 0.8548 | 0.8548 | 0.8543 | 0.8587 | | 0.2816 | 8.9974 | 2643 | 0.3167 | 0.8536 | 0.8536 | 0.8530 | 0.8561 | | 0.2378 | 9.9745 | 2930 | 0.3063 | 0.8702 | 0.8702 | 0.8686 | 0.8709 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.19.0 - Tokenizers 0.19.1
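An inference sketch for this image classifier; the 17 class names come from the model config (they are not listed in the card), and the input path is a placeholder:

```python
# Sketch: generic image-classification inference through the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Kushagra07/swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask",
)
print(classifier("document_scan.jpg", top_k=3))  # placeholder image path
```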
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swin-base-patch4-window7-224", "model-index": [{"name": "swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.855683269476373, "name": "Accuracy"}, {"type": "recall", "value": 0.855683269476373, "name": "Recall"}, {"type": "f1", "value": 0.8542203503644927, "name": "F1"}, {"type": "precision", "value": 0.8559779206156822, "name": "Precision"}]}]}]}
Kushagra07/swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-base-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:48:34+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-base-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swin-base-patch4-window7-224-finetuned-ind-17-imbalanced-aadhaarmask ==================================================================== This model is a fine-tuned version of microsoft/swin-base-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.3209 * Accuracy: 0.8557 * Recall: 0.8557 * F1: 0.8542 * Precision: 0.8560 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.0a0+81ea7a4 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-base-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 75, 142, 5, 48 ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-base-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
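A minimal generation sketch, assuming a Gemma chat checkpoint as the "gemma", "text-generation", and "conversational" tags indicate; the message content is illustrative:

```python
# Sketch: chat-style generation using the tokenizer's built-in chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("vijayvarmak/gemma-FT-Gemini-Full")
model = AutoModelForCausalLM.from_pretrained("vijayvarmak/gemma-FT-Gemini-Full", device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]  # illustrative conversation
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```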
{"library_name": "transformers", "tags": []}
vijayvarmak/gemma-FT-Gemini-Full
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:48:54+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 46, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sea-lion-7b-text-to-sql This model is a fine-tuned version of [aisingapore/sea-lion-7b-instruct](https://huggingface.co/aisingapore/sea-lion-7b-instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.2.1 - Datasets 2.16.1 - Tokenizers 0.15.2
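The card above gives no usage snippet; here is a minimal inference sketch, not part of the original card (the prompt wording, `trust_remote_code`, and generation settings are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "aisingapore/sea-lion-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)
# Attach the fine-tuned text-to-SQL adapter from this repo on top of the base model
model = PeftModel.from_pretrained(base, "Phuree/sea-lion-7b-text-to-sql")

prompt = "Translate to SQL: list all customers who placed an order in 2023."  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```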
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "aisingapore/sea-lion-7b-instruct", "model-index": [{"name": "sea-lion-7b-text-to-sql", "results": []}]}
Phuree/sea-lion-7b-text-to-sql
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:aisingapore/sea-lion-7b-instruct", "license:mit", "region:us" ]
null
2024-05-01T11:49:37+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-aisingapore/sea-lion-7b-instruct #license-mit #region-us
# sea-lion-7b-text-to-sql This model is a fine-tuned version of aisingapore/sea-lion-7b-instruct on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 10 ### Training results ### Framework versions - PEFT 0.7.2.dev0 - Transformers 4.36.2 - Pytorch 2.2.1 - Datasets 2.16.1 - Tokenizers 0.15.2
[ "# sea-lion-7b-text-to-sql\n\nThis model is a fine-tuned version of aisingapore/sea-lion-7b-instruct on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.2.1\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-aisingapore/sea-lion-7b-instruct #license-mit #region-us \n", "# sea-lion-7b-text-to-sql\n\nThis model is a fine-tuned version of aisingapore/sea-lion-7b-instruct on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10", "### Training results", "### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.2.1\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
[ 52, 42, 7, 9, 9, 4, 126, 5, 51 ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-aisingapore/sea-lion-7b-instruct #license-mit #region-us \n# sea-lion-7b-text-to-sql\n\nThis model is a fine-tuned version of aisingapore/sea-lion-7b-instruct on the generator dataset.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 4\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 10### Training results### Framework versions\n\n- PEFT 0.7.2.dev0\n- Transformers 4.36.2\n- Pytorch 2.2.1\n- Datasets 2.16.1\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Mistral 7B NL2BASH Agent This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the nl2bash dataset. It achieves the following results on the evaluation set: - Loss: 1.5952 ## Model description Mistral 7B NL2BASH Agent is a fine-tuned model that converts natural language queries into Linux commands. It serves as an intelligent agent capable of generating Linux commands based on user input in the form of natural language queries. ## Intended uses & limitations - Automating the process of creating Linux commands from natural language queries. - Assisting users in generating complex Linux commands quickly and accurately. - The model's performance may vary based on the complexity and specificity of the natural language queries. - It may not handle all edge cases or uncommon scenarios effectively. ## Installation ```bash pip install transformers accelerate torch bitsandbytes peft ``` ## Usage ```python from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig import torch from peft import PeftModel, PeftConfig read_token="YOUR HUGGINGFACE TOKEN" nf4_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=True, bnb_4bit_compute_dtype=torch.bfloat16 ) model = AutoModelForCausalLM.from_pretrained( "mistralai/Mistral-7B-Instruct-v0.2", device_map='auto', quantization_config=nf4_config, use_cache=False, token=read_token ) model = PeftModel.from_pretrained(model, "pranay-j/mistral-7b-nl2bash-agent",device_map='auto',token=read_token) tokenizer=AutoTokenizer.from_pretrained("pranay-j/mistral-7b-nl2bash-agent",add_eos_token=False) nl='Add "execute" to the permissions of all directories in the home directory tree' prompt= f"[INST] {nl} [/INST]" inputs=tokenizer(prompt,return_tensors="pt") input_ids=inputs["input_ids"].to("cuda") with torch.no_grad(): out=model.generate(input_ids,top_p=0.5, temperature=0.7, max_new_tokens=30) tokenizer.decode(out[0][input_ids.shape[-1]:]) # Output: find ~ -type d -exec chmod +x {} </s> ``` ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 5 - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6136 | 1.0 | 202 | 1.6451 | | 1.5448 | 2.0 | 404 | 1.5952 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["jiacheng-ye/nl2bash"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "Mistral 7B NL2BASH Agent", "results": []}]}
pranay-j/mistral-7b-nl2bash-agent
null
[ "peft", "safetensors", "generated_from_trainer", "en", "dataset:jiacheng-ye/nl2bash", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-05-01T11:50:23+00:00
[]
[ "en" ]
TAGS #peft #safetensors #generated_from_trainer #en #dataset-jiacheng-ye/nl2bash #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
Mistral 7B NL2BASH Agent ======================== This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on the nl2bash dataset. It achieves the following results on the evaluation set: * Loss: 1.5952 Model description ----------------- Mistral 7B NL2BASH Agent is a fine-tuned model that converts natural language queries into Linux commands. It serves as an intelligent agent capable of generating Linux commands based on user input in the form of natural language queries. Intended uses & limitations --------------------------- * Automating the process of creating Linux commands from natural language queries. * Assisting users in generating complex Linux commands quickly and accurately. * The model's performance may vary based on the complexity and specificity of the natural language queries. * It may not handle all edge cases or uncommon scenarios effectively. Installation ------------ Usage ----- Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2.5e-05 * train\_batch\_size: 10 * eval\_batch\_size: 10 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 40 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 5 * num\_epochs: 2 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 40\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #en #dataset-jiacheng-ye/nl2bash #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 40\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 63, 142, 5, 48 ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #en #dataset-jiacheng-ye/nl2bash #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 40\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 5\n* num\\_epochs: 2### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's Files tab for the actual archive):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename below is an assumption; use the actual .zip listed in the repo's Files tab.
checkpoint = load_from_hub("emmermarcell/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
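A short evaluation sketch under the same assumptions (LunarLander-v2 requires `gymnasium[box2d]` to be installed):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Reuses the `model` loaded above
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```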
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "239.60 +/- 23.52", "name": "mean_reward", "verified": false}]}]}]}
emmermarcell/ppo-LunarLander-v2
null
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-05-01T11:50:33+00:00
[]
[]
TAGS #stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# PPO Agent playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ 31, 35, 17 ]
[ "TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Finetune-test3 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.1515 | 0.9956 | 56 | 0.7617 | | 0.7099 | 1.9911 | 112 | 0.6591 | | 0.6427 | 2.9867 | 168 | 0.6336 | | 0.5897 | 4.0 | 225 | 0.6145 | | 0.5634 | 4.9956 | 281 | 0.6038 | | 0.5328 | 5.9911 | 337 | 0.6003 | | 0.5084 | 6.9867 | 393 | 0.6019 | | 0.4793 | 8.0 | 450 | 0.6030 | | 0.4718 | 8.9956 | 506 | 0.6088 | | 0.4559 | 9.9556 | 560 | 0.6143 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.0.1+cu118 - Datasets 2.19.0 - Tokenizers 0.19.1
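The card above has no usage example; a minimal inference sketch, not part of the original card (loading the GPTQ base requires `optimum` and `auto-gptq`; the prompt is a hypothetical example in Mistral-Instruct format):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach the fine-tuned LoRA adapter from this repo
model = PeftModel.from_pretrained(base, "AmaanUsmani/Finetune-test3")

prompt = "[INST] What can you help me with? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```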
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "model-index": [{"name": "Finetune-test3", "results": []}]}
AmaanUsmani/Finetune-test3
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:TheBloke/Mistral-7B-Instruct-v0.2-GPTQ", "license:apache-2.0", "region:us" ]
null
2024-05-01T11:53:42+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us
Finetune-test3 ============== This model is a fine-tuned version of TheBloke/Mistral-7B-Instruct-v0.2-GPTQ on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.6143 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.40.1 * Pytorch 2.0.1+cu118 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 52, 151, 5, 52 ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-TheBloke/Mistral-7B-Instruct-v0.2-GPTQ #license-apache-2.0 #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.40.1\n* Pytorch 2.0.1+cu118\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1-4x4-no_slippery**
  This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-4x4-no_slippery**.

  ## Usage

  model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

  # Don't forget to check if you need to add additional attributes (is_slippery=False etc)
  env = gym.make(model["env_id"])
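`load_from_hub` is not defined in the card; below is one possible implementation, matching the pickle-based checkpoints commonly used for Q-Learning repos (the dict layout is an assumption based on the card's own `model["env_id"]` access):

```python
import pickle

import gymnasium as gym  # or `import gym` on older setups
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-table + metadata and deserialize it
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)
```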
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
ws11yrin/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-01T11:53:46+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1-4x4-no_slippery
 This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4-no_slippery.

 ## Usage

 model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4-noSlippery", filename="URL")

 # Don't forget to check if you need to add additional attributes (is_slippery=False etc)
 env = URL(model["env_id"])
[ "# Q-Learning Agent playing1 FrozenLake-v1-4x4-no_slippery\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4-no_slippery .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1-4x4-no_slippery\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4-no_slippery .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
[ 35, 133 ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n# Q-Learning Agent playing1 FrozenLake-v1-4x4-no_slippery\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4-no_slippery .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4-noSlippery\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
null
transformers
# GGUF / IQ / Imatrix for [Spicy-Laymonade-7B](https://huggingface.co/ABX-AI/Spicy-Laymonade-7B)

Adding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/bMW7mRqBS_xQJBXn-szWS.png)

**Why Importance Matrix?**

**Importance Matrix**, at least based on my testing, has been shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy.
The **Imatrix** performs a calibration, using a provided dataset. Testing has shown that semi-randomized data can help preserve more important segments as the compression is applied.

Related discussions in Github:
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006) [[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

# Spicy-Laymonade-7B

Well, we have Laymonade, so why not spice it up? This merge is a step into creating a new 9B.

However, I did try it out, and it seemed to work pretty well.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* [cgato/TheSpice-7b-v0.1.1](https://huggingface.co/cgato/TheSpice-7b-v0.1.1)
* [ABX-AI/Laymonade-7B](https://huggingface.co/ABX-AI/Laymonade-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: cgato/TheSpice-7b-v0.1.1
        layer_range: [0, 32]
      - model: ABX-AI/Laymonade-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ABX-AI/Laymonade-7B
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.6, 0.2, 0.5]
    - filter: mlp
      value: [0.3, 0.7, 0.4, 0.8, 0.5]
    - value: 0.5
dtype: bfloat16
```
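A minimal llama-cpp-python sketch for running one of these quants, not part of the original card (the GGUF filename is an assumption; pick an actual file from this repo's Files tab):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Filename is hypothetical; substitute a real quant from the repo
path = hf_hub_download(
    "ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix",
    "Spicy-Laymonade-7B-Q4_K_M-imat.gguf",
)
llm = Llama(model_path=path, n_ctx=4096)
out = llm("Write a haiku about lemonade.", max_tokens=64)
print(out["choices"][0]["text"])
```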
{"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "not-for-all-audiences"], "base_model": ["cgato/TheSpice-7b-v0.1.1", "ABX-AI/Laymonade-7B"]}
ABX-AI/Spicy-Laymonade-7B-GGUF-IQ-Imatrix
null
[ "transformers", "gguf", "mergekit", "merge", "not-for-all-audiences", "base_model:cgato/TheSpice-7b-v0.1.1", "base_model:ABX-AI/Laymonade-7B", "license:other", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:55:18+00:00
[]
[]
TAGS #transformers #gguf #mergekit #merge #not-for-all-audiences #base_model-cgato/TheSpice-7b-v0.1.1 #base_model-ABX-AI/Laymonade-7B #license-other #endpoints_compatible #region-us
# GGUF / IQ / Imatrix for Spicy-Laymonade-7B Adding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally. !image/png Why Importance Matrix? Importance Matrix, at least based on my testing, has shown to improve the output and performance of "IQ"-type quantizations, where the compression becomes quite heavy. The Imatrix performs a calibration, using a provided dataset. Testing has shown that semi-randomized data can help perserve more important segments as the compression is applied. Related discussions in Github: [[1]](URL [[2]](URL The URL file that I used contains general, semi-random data, with some custom kink. # Spicy-Laymonade-7B Well, we have Laymonade, so why not spice it up? This merge is a step into creating a new 9B. However, I did try it out, and it seemed to work pretty well. ## Merge Details This is a merge of pre-trained language models created using mergekit. ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * cgato/TheSpice-7b-v0.1.1 * ABX-AI/Laymonade-7B ### Configuration The following YAML configuration was used to produce this model:
[ "# GGUF / IQ / Imatrix for Spicy-Laymonade-7B\n\nAdding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally.\n\n!image/png\n\nWhy Importance Matrix?\n\nImportance Matrix, at least based on my testing, has shown to improve the output and performance of \"IQ\"-type quantizations, where the compression becomes quite heavy.\nThe Imatrix performs a calibration, using a provided dataset. Testing has shown that semi-randomized data can help perserve more important segments as the compression is applied.\n\nRelated discussions in Github:\n[[1]](URL [[2]](URL\n\nThe URL file that I used contains general, semi-random data, with some custom kink.", "# Spicy-Laymonade-7B\n\nWell, we have Laymonade, so why not spice it up? This merge is a step into creating a new 9B.\n\nHowever, I did try it out, and it seemed to work pretty well.", "## Merge Details\n\nThis is a merge of pre-trained language models created using mergekit.", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cgato/TheSpice-7b-v0.1.1\n* ABX-AI/Laymonade-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #base_model-cgato/TheSpice-7b-v0.1.1 #base_model-ABX-AI/Laymonade-7B #license-other #endpoints_compatible #region-us \n", "# GGUF / IQ / Imatrix for Spicy-Laymonade-7B\n\nAdding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally.\n\n!image/png\n\nWhy Importance Matrix?\n\nImportance Matrix, at least based on my testing, has shown to improve the output and performance of \"IQ\"-type quantizations, where the compression becomes quite heavy.\nThe Imatrix performs a calibration, using a provided dataset. Testing has shown that semi-randomized data can help perserve more important segments as the compression is applied.\n\nRelated discussions in Github:\n[[1]](URL [[2]](URL\n\nThe URL file that I used contains general, semi-random data, with some custom kink.", "# Spicy-Laymonade-7B\n\nWell, we have Laymonade, so why not spice it up? This merge is a step into creating a new 9B.\n\nHowever, I did try it out, and it seemed to work pretty well.", "## Merge Details\n\nThis is a merge of pre-trained language models created using mergekit.", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* cgato/TheSpice-7b-v0.1.1\n* ABX-AI/Laymonade-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ 71, 171, 52, 19, 17, 44, 16 ]
[ "TAGS\n#transformers #gguf #mergekit #merge #not-for-all-audiences #base_model-cgato/TheSpice-7b-v0.1.1 #base_model-ABX-AI/Laymonade-7B #license-other #endpoints_compatible #region-us \n# GGUF / IQ / Imatrix for Spicy-Laymonade-7B\n\nAdding GGUF as I noticed the HF model had a lot of downloads but I never quantized it originally.\n\n!image/png\n\nWhy Importance Matrix?\n\nImportance Matrix, at least based on my testing, has shown to improve the output and performance of \"IQ\"-type quantizations, where the compression becomes quite heavy.\nThe Imatrix performs a calibration, using a provided dataset. Testing has shown that semi-randomized data can help perserve more important segments as the compression is applied.\n\nRelated discussions in Github:\n[[1]](URL [[2]](URL\n\nThe URL file that I used contains general, semi-random data, with some custom kink.# Spicy-Laymonade-7B\n\nWell, we have Laymonade, so why not spice it up? This merge is a step into creating a new 9B.\n\nHowever, I did try it out, and it seemed to work pretty well.## Merge Details\n\nThis is a merge of pre-trained language models created using mergekit.### Merge Method\n\nThis model was merged using the SLERP merge method.### Models Merged\n\nThe following models were included in the merge:\n* cgato/TheSpice-7b-v0.1.1\n* ABX-AI/Laymonade-7B### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->


# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA

<Gallery />

## Model description

These are naripok/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use a photo of TOK dog to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](naripok/corgy_dog_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

A possible snippet is sketched below, after the training details.

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
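A sketch filling in the "How to use" TODO above with standard diffusers LoRA usage (the adapter id and trigger word come from this card; dtype and device choices are assumptions, and the fp16-fix VAE mentioned above may be needed for clean fp16 output):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# Load the DreamBooth LoRA weights from this repo
pipe.load_lora_weights("naripok/corgy_dog_LoRA")
image = pipe("a photo of TOK dog on a beach").images[0]
image.save("corgy_dog.png")
```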
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of TOK dog", "widget": []}
naripok/corgy_dog_LoRA
null
[ "diffusers", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T11:56:25+00:00
[]
[]
TAGS #diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA

<Gallery />

## Model description

These are naripok/corgy_dog_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using DreamBooth.

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use a photo of TOK dog to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.

## Intended uses & limitations

#### How to use

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA\n\n<Gallery />", "## Model description\n\nThese are naripok/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of TOK dog to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA\n\n<Gallery />", "## Model description\n\nThese are naripok/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of TOK dog to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ 70, 25, 85, 19, 25, 6, 7, 23, 17 ]
[ "TAGS\n#diffusers #text-to-image #diffusers-training #lora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# SDXL LoRA DreamBooth - naripok/corgy_dog_LoRA\n\n<Gallery />## Model description\n\nThese are naripok/corgy_dog_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.## Trigger words\n\nYou should use a photo of TOK dog to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.## Intended uses & limitations#### How to use#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]## Training details\n\n[TODO: describe the data used to train the model]" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlm-funsd This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset. It achieves the following results on the evaluation set: - Loss: 0.6771 - Answer: {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809} - Header: {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119} - Question: {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065} - Overall Precision: 0.7216 - Overall Recall: 0.7893 - Overall F1: 0.7539 - Overall Accuracy: 0.8139 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 1.8027 | 1.0 | 10 | 1.5884 | {'precision': 0.01997780244173141, 'recall': 0.022249690976514216, 'f1': 0.02105263157894737, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.18858307849133538, 'recall': 0.17370892018779344, 'f1': 0.18084066471163246, 'number': 1065} | 0.1079 | 0.1019 | 0.1048 | 0.3753 | | 1.4071 | 2.0 | 20 | 1.2076 | {'precision': 0.23890339425587467, 'recall': 0.22620519159456118, 'f1': 0.23238095238095238, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.41302791696492486, 'recall': 0.5417840375586854, 'f1': 0.4687246141348498, 'number': 1065} | 0.3512 | 0.3813 | 0.3656 | 0.5772 | | 1.0593 | 3.0 | 30 | 0.9154 | {'precision': 0.4750542299349241, 'recall': 0.5414091470951793, 'f1': 0.5060658578856152, 'number': 809} | {'precision': 0.11363636363636363, 'recall': 0.04201680672268908, 'f1': 0.06134969325153375, 'number': 119} | {'precision': 0.5922493681550126, 'recall': 0.6600938967136151, 'f1': 0.6243339253996447, 'number': 1065} | 0.5323 | 0.5750 | 0.5528 | 0.7136 | | 0.802 | 4.0 | 40 | 0.7552 | {'precision': 0.5981404958677686, 'recall': 0.715698393077874, 'f1': 0.6516601012943164, 'number': 809} | {'precision': 0.20253164556962025, 'recall': 0.13445378151260504, 'f1': 0.1616161616161616, 'number': 119} | {'precision': 0.6680707666385847, 'recall': 0.7446009389671362, 'f1': 0.7042628774422734, 'number': 1065} | 0.6213 | 0.6964 | 0.6567 | 0.7659 | | 0.6561 | 5.0 | 50 | 0.7030 | {'precision': 0.6381856540084389, 'recall': 
0.7478368355995055, 'f1': 0.6886738759248718, 'number': 809} | {'precision': 0.3, 'recall': 0.226890756302521, 'f1': 0.25837320574162675, 'number': 119} | {'precision': 0.6780766096169519, 'recall': 0.7812206572769953, 'f1': 0.7260034904013962, 'number': 1065} | 0.6464 | 0.7346 | 0.6876 | 0.7889 | | 0.5591 | 6.0 | 60 | 0.6842 | {'precision': 0.6502100840336135, 'recall': 0.765142150803461, 'f1': 0.7030096536059057, 'number': 809} | {'precision': 0.3132530120481928, 'recall': 0.2184873949579832, 'f1': 0.25742574257425743, 'number': 119} | {'precision': 0.7165820642978004, 'recall': 0.7953051643192488, 'f1': 0.7538940809968847, 'number': 1065} | 0.6730 | 0.7486 | 0.7088 | 0.7942 | | 0.4858 | 7.0 | 70 | 0.6508 | {'precision': 0.6569948186528497, 'recall': 0.7836835599505563, 'f1': 0.7147688838782412, 'number': 809} | {'precision': 0.34210526315789475, 'recall': 0.3277310924369748, 'f1': 0.33476394849785407, 'number': 119} | {'precision': 0.7205503009458297, 'recall': 0.7868544600938967, 'f1': 0.7522441651705565, 'number': 1065} | 0.6740 | 0.7582 | 0.7136 | 0.8063 | | 0.431 | 8.0 | 80 | 0.6674 | {'precision': 0.6578140960163432, 'recall': 0.796044499381953, 'f1': 0.7203579418344519, 'number': 809} | {'precision': 0.35964912280701755, 'recall': 0.3445378151260504, 'f1': 0.351931330472103, 'number': 119} | {'precision': 0.7482517482517482, 'recall': 0.8037558685446009, 'f1': 0.775011317338162, 'number': 1065} | 0.6889 | 0.7732 | 0.7286 | 0.7969 | | 0.3878 | 9.0 | 90 | 0.6526 | {'precision': 0.6787564766839378, 'recall': 0.8096415327564895, 'f1': 0.7384441939120632, 'number': 809} | {'precision': 0.336283185840708, 'recall': 0.31932773109243695, 'f1': 0.32758620689655166, 'number': 119} | {'precision': 0.7586206896551724, 'recall': 0.7849765258215963, 'f1': 0.7715736040609138, 'number': 1065} | 0.7014 | 0.7672 | 0.7328 | 0.8073 | | 0.3744 | 10.0 | 100 | 0.6519 | {'precision': 0.6854410201912858, 'recall': 0.7972805933250927, 'f1': 0.7371428571428571, 'number': 809} | {'precision': 0.3130434782608696, 'recall': 0.3025210084033613, 'f1': 0.3076923076923077, 'number': 119} | {'precision': 0.7611940298507462, 'recall': 0.8140845070422535, 'f1': 0.7867513611615246, 'number': 1065} | 0.7052 | 0.7767 | 0.7393 | 0.8120 | | 0.3161 | 11.0 | 110 | 0.6696 | {'precision': 0.6948257655755016, 'recall': 0.8133498145859085, 'f1': 0.7494305239179954, 'number': 809} | {'precision': 0.3283582089552239, 'recall': 0.3697478991596639, 'f1': 0.34782608695652173, 'number': 119} | {'precision': 0.7604166666666666, 'recall': 0.8225352112676056, 'f1': 0.7902571041948578, 'number': 1065} | 0.7067 | 0.7918 | 0.7468 | 0.8060 | | 0.3039 | 12.0 | 120 | 0.6656 | {'precision': 0.7007534983853606, 'recall': 0.8046971569839307, 'f1': 0.7491369390103566, 'number': 809} | {'precision': 0.3524590163934426, 'recall': 0.36134453781512604, 'f1': 0.35684647302904565, 'number': 119} | {'precision': 0.7695769576957696, 'recall': 0.8028169014084507, 'f1': 0.7858455882352942, 'number': 1065} | 0.7165 | 0.7772 | 0.7456 | 0.8131 | | 0.2877 | 13.0 | 130 | 0.6742 | {'precision': 0.6927138331573389, 'recall': 0.8108776266996292, 'f1': 0.7471526195899771, 'number': 809} | {'precision': 0.32592592592592595, 'recall': 0.3697478991596639, 'f1': 0.3464566929133859, 'number': 119} | {'precision': 0.7651715039577837, 'recall': 0.8169014084507042, 'f1': 0.7901907356948229, 'number': 1065} | 0.7075 | 0.7878 | 0.7455 | 0.8109 | | 0.2681 | 14.0 | 140 | 0.6743 | {'precision': 0.7128927410617552, 'recall': 0.8133498145859085, 'f1': 0.7598152424942264, 
'number': 809} | {'precision': 0.36220472440944884, 'recall': 0.3865546218487395, 'f1': 0.37398373983739847, 'number': 119} | {'precision': 0.7734513274336283, 'recall': 0.8206572769953052, 'f1': 0.7963553530751709, 'number': 1065} | 0.7239 | 0.7918 | 0.7563 | 0.8148 | | 0.2609 | 15.0 | 150 | 0.6771 | {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809} | {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119} | {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065} | 0.7216 | 0.7893 | 0.7539 | 0.8139 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
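The card above omits a usage snippet; a minimal token-classification sketch, not part of the original card (the words and 0-1000 normalized bounding boxes are dummy illustration values, and the tokenizer is loaded from the base checkpoint named in the card):

```python
import torch
from transformers import LayoutLMForTokenClassification, LayoutLMTokenizerFast

tokenizer = LayoutLMTokenizerFast.from_pretrained("microsoft/layoutlm-base-uncased")
model = LayoutLMForTokenClassification.from_pretrained("leom21/layoutlm-funsd")

words = ["Invoice", "Number:", "12345"]                               # dummy OCR words
boxes = [[60, 50, 200, 70], [210, 50, 330, 70], [340, 50, 420, 70]]   # dummy 0-1000 boxes

# LayoutLM expects one box per WordPiece token, plus boxes for [CLS] and [SEP]
token_boxes = [[0, 0, 0, 0]]
for word, box in zip(words, boxes):
    token_boxes.extend([box] * len(tokenizer.tokenize(word)))
token_boxes.append([1000, 1000, 1000, 1000])

encoding = tokenizer(" ".join(words), return_tensors="pt")
with torch.no_grad():
    outputs = model(
        input_ids=encoding.input_ids,
        attention_mask=encoding.attention_mask,
        bbox=torch.tensor([token_boxes]),
    )
pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in pred_ids])
```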
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["funsd"], "base_model": "microsoft/layoutlm-base-uncased", "model-index": [{"name": "layoutlm-funsd", "results": []}]}
leom21/layoutlm-funsd
null
[ "transformers", "tensorboard", "safetensors", "layoutlm", "token-classification", "generated_from_trainer", "dataset:funsd", "base_model:microsoft/layoutlm-base-uncased", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T11:56:57+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #layoutlm #token-classification #generated_from_trainer #dataset-funsd #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us
layoutlm-funsd ============== This model is a fine-tuned version of microsoft/layoutlm-base-uncased on the funsd dataset. It achieves the following results on the evaluation set: * Loss: 0.6771 * Answer: {'precision': 0.7107258938244854, 'recall': 0.8108776266996292, 'f1': 0.7575057736720554, 'number': 809} * Header: {'precision': 0.3543307086614173, 'recall': 0.37815126050420167, 'f1': 0.3658536585365853, 'number': 119} * Question: {'precision': 0.7716814159292036, 'recall': 0.8187793427230047, 'f1': 0.7945330296127562, 'number': 1065} * Overall Precision: 0.7216 * Overall Recall: 0.7893 * Overall F1: 0.7539 * Overall Accuracy: 0.8139 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 15 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.3.0+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #layoutlm #token-classification #generated_from_trainer #dataset-funsd #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 61, 112, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #layoutlm #token-classification #generated_from_trainer #dataset-funsd #base_model-microsoft/layoutlm-base-uncased #license-mit #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.3.0+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
Unlike other fake context extensions, this is still the one and only Llama 3 70b inst 32k.
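A minimal generation sketch, not part of the original card (a 70B model needs multiple GPUs or quantization; dtype, generation settings, and the presence of Llama 3's chat template in this repo are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Llama-3-70B-Instruct-32k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```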
{"license": "other", "license_name": "llama3", "license_link": "LICENSE"}
maywell/Llama-3-70B-Instruct-32k
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T11:59:47+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Unlike other fake context extensions, this is still the one and only Llama 3 70b inst 32k.
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 41 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny chinese - VingeNie This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.7486 - Cer Ortho: 54.2817 - Cer: 29.8525 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 25 - training_steps: 650 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.8045 | 0.3594 | 225 | 0.8040 | 53.9956 | 31.6580 | | 0.7595 | 0.7188 | 450 | 0.7486 | 54.2817 | 29.8525 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
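The card above has no usage snippet; a minimal transcription sketch, not part of the original card (the audio filename is a placeholder):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="VingeNie/whisper-tiny-zh_CN_lr5")
# `language`/`task` force Chinese transcription rather than translation
result = asr("sample_zh.wav", generate_kwargs={"language": "chinese", "task": "transcribe"})
print(result["text"])
```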
{"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]}
VingeNie/whisper-tiny-zh_CN_lr5
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:00:23+00:00
[]
[ "zh" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
Whisper Tiny chinese - VingeNie =============================== This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: * Loss: 0.7486 * Cer Ortho: 54.2817 * Cer: 29.8525 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 64 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant\_with\_warmup * lr\_scheduler\_warmup\_steps: 25 * training\_steps: 650 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 72, 133, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tiny chinese - VingeNie This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: - Loss: 0.7223 - Cer Ortho: 40.6344 - Cer: 29.5803 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 64 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant_with_warmup - lr_scheduler_warmup_steps: 25 - training_steps: 650 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer Ortho | Cer | |:-------------:|:------:|:----:|:---------------:|:---------:|:-------:| | 0.7743 | 0.3594 | 225 | 0.7942 | 40.4431 | 31.4481 | | 0.7344 | 0.7188 | 450 | 0.7223 | 40.6344 | 29.5803 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
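The card above reports character error rates (CER); below is a short sketch of how such a score can be computed with the `evaluate` library. This assumes the `jiwer`-backed `cer` metric and is not necessarily the exact scoring script used during training:

```python
# Sketch of computing a character error rate (CER) like the one reported
# above, using the jiwer-backed "cer" metric from the evaluate library.
import evaluate  # pip install evaluate jiwer

cer_metric = evaluate.load("cer")

predictions = ["今天天气很好"]  # model transcripts
references = ["今天天气真好"]   # ground-truth transcripts

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.2f}%")  # character edits / reference characters
```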
{"language": ["zh"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_16_1"], "base_model": "openai/whisper-tiny", "model-index": [{"name": "Whisper Tiny chinese - VingeNie", "results": []}]}
VingeNie/whisper-tiny-zh_CN_lr4
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "zh", "dataset:mozilla-foundation/common_voice_16_1", "base_model:openai/whisper-tiny", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:00:31+00:00
[]
[ "zh" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us
Whisper Tiny chinese - VingeNie =============================== This model is a fine-tuned version of openai/whisper-tiny on the Common Voice 16.1 dataset. It achieves the following results on the evaluation set: * Loss: 0.7223 * Cer Ortho: 40.6344 * Cer: 29.5803 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 64 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant\_with\_warmup * lr\_scheduler\_warmup\_steps: 25 * training\_steps: 650 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 72, 133, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #zh #dataset-mozilla-foundation/common_voice_16_1 #base_model-openai/whisper-tiny #license-apache-2.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 64\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\\_with\\_warmup\n* lr\\_scheduler\\_warmup\\_steps: 25\n* training\\_steps: 650\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
youko-8b is a Llama-3 model that is highly fluent in Japanese thanks to additional continued pretraining on Japanese data.
A difference-vector merge with the Instruct model was performed:

> rinna/llama-3-youko-8b + 0.8*(meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)

For details, please see [rinna/llama-3-youko-8b](https://huggingface.co/rinna/llama-3-youko-8b).
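The merge recipe quoted above adds 0.8 times the Instruct-minus-base weight difference to the Japanese continued-pretraining model. Below is a rough reconstruction of that difference-vector merge with plain `transformers` state dicts; this is an illustrative sketch, not the author's actual script, and it assumes access to the gated Llama-3 weights plus enough RAM to hold three 8B models in bfloat16:

```python
# Illustrative reconstruction of the difference-vector merge above:
#   youko + 0.8 * (Meta-Llama-3-8B-Instruct - Meta-Llama-3-8B)
import torch
from transformers import AutoModelForCausalLM

dtype = torch.bfloat16
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B", torch_dtype=dtype)
inst = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct", torch_dtype=dtype)
youko = AutoModelForCausalLM.from_pretrained("rinna/llama-3-youko-8b", torch_dtype=dtype)

ratio = 0.8
base_sd, inst_sd = base.state_dict(), inst.state_dict()
for name, tensor in youko.state_dict().items():
    # Merge only tensors whose shapes line up across all three checkpoints.
    if name in base_sd and base_sd[name].shape == tensor.shape:
        tensor += ratio * (inst_sd[name] - base_sd[name])  # in-place update

youko.save_pretrained("llama-3-youko-8b-instruct-chatvector")
```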
{"license": "llama3"}
aixsatoshi/Llama-3-youko-8b-instruct-chatvector
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:00:50+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
youko-8b is a Llama-3 model that is highly fluent in Japanese thanks to additional continued pretraining on Japanese data.
A difference-vector merge with the Instruct model was performed:

> rinna/llama-3-youko-8b + 0.8*(meta-llama/Meta-Llama-3-8B-Instruct - meta-llama/Meta-Llama-3-8B)

For details, please see rinna/llama-3-youko-8b.
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
[ 43 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #license-llama3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-to-image
diffusers
# Insane Realistic 2 Original page: https://civitai.com/models/108585/ Samples and prompts: ![Free online AI image generator Insane Realistic 2](https://cdn-uploads.huggingface.co/production/uploads/63239b8370edc53f51cd5d42/rCpAdhQZzdBMuFRo2IDqC.png) (Click for larger) Top left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering Top right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile Bottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god. Bottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest
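A minimal generation sketch for this checkpoint with the `diffusers` `StableDiffusionPipeline`, reusing a shortened version of the first sample prompt; the step count and guidance scale are illustrative choices, not settings from the original page:

```python
# Minimal text-to-image sketch for this checkpoint with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Yntec/insaneRealistic_v2", torch_dtype=torch.float16
).to("cuda")

# Shortened version of the "Top left" sample prompt from the card.
prompt = ("a cute girl with freckles on her face, cgsociety unreal engine, "
          "style of aenami alena, emissive light, dramatic rendering")
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.0).images[0]
image.save("insane_realistic_sample.png")
```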
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["Base Model", "Realism", "Female", "Woman", "cordonsolution8", "stable-diffusion", "stable-diffusion-diffusers", "diffusers", "text-to-image"], "pipeline_tag": "text-to-image"}
Yntec/insaneRealistic_v2
null
[ "diffusers", "safetensors", "Base Model", "Realism", "Female", "Woman", "cordonsolution8", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us", "has_space" ]
null
2024-05-01T12:01:07+00:00
[]
[]
TAGS #diffusers #safetensors #Base Model #Realism #Female #Woman #cordonsolution8 #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us #has_space
# Insane Realistic 2 Original page: URL Samples and prompts: !Free online AI image generator Insane Realistic 2 (Click for larger) Top left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering Top right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile Bottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god. Bottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest
[ "# Insane Realistic 2\n\nOriginal page: URL\n\nSamples and prompts:\n\n!Free online AI image generator Insane Realistic 2\n\n(Click for larger)\n\nTop left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering\n\nTop right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile\n\nBottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god.\n\nBottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest" ]
[ "TAGS\n#diffusers #safetensors #Base Model #Realism #Female #Woman #cordonsolution8 #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us #has_space \n", "# Insane Realistic 2\n\nOriginal page: URL\n\nSamples and prompts:\n\n!Free online AI image generator Insane Realistic 2\n\n(Click for larger)\n\nTop left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering\n\nTop right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile\n\nBottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god.\n\nBottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest" ]
[ 73, 291 ]
[ "TAGS\n#diffusers #safetensors #Base Model #Realism #Female #Woman #cordonsolution8 #stable-diffusion #stable-diffusion-diffusers #text-to-image #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us #has_space \n# Insane Realistic 2\n\nOriginal page: URL\n\nSamples and prompts:\n\n!Free online AI image generator Insane Realistic 2\n\n(Click for larger)\n\nTop left: a cute girl with freckles on her face, cgsociety unreal engine, wet t-shirt, short skirt, style of aenami alena, trending on artstartion, inspired by Fyodor Vasilyev, looks a bit similar to amy adams, emissive light, fluffy orange skin, dribbble, dramatic rendering\n\nTop right: 90s grainy vhs still young mother loose shirt, headband. holding a baby, on the couch, posing, bow. bokeh, bright lighting. smile\n\nBottom left: beautiful image of the first day of creation of the world and planet earth in the dark deep space, light and darkness separated, planets, under a black night sky of astronomical glittering starlight in the outer reaches of the solar system beyond, trending on artstation, octane render, symmetry by raqib shaw, presence of god, eye of god.\n\nBottom right: hill, mountains, sunset, field, world, ocean, trees, underground, city, village, path, urban, mountain, buildings, waterfall, skyline, nature, town, industrial, architecture, road, jungle, valley, bridge, horizon, landscape, house, building, environment, wilderness, enviroment, river, cave, desert, forest" ]
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir

<Gallery />

## Model description

These are rshir/sdxl-lora-rshir LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.

## Trigger words

You should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](rshir/sdxl-lora-rshir/tree/main) them in the Files & versions tab.
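A short sketch of applying these DreamBooth LoRA weights on top of the SDXL base model with `diffusers`, using the trigger prompt from the card; the sampler settings are illustrative:

```python
# Sketch: apply these DreamBooth LoRA weights on top of SDXL base with
# diffusers, using the trigger prompt from the card.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.load_lora_weights("rshir/sdxl-lora-rshir")

prompt = ("A photo of Roman Shirochenko wearing casual clothes, "
          "taking a selfie, and smiling.")
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("dreambooth_lora_sample.png")
```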
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling."}
rshir/sdxl-lora-rshir
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-05-01T12:01:20+00:00
[]
[]
TAGS #diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir

<Gallery />

## Model description

These are rshir/sdxl-lora-rshir LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using DreamBooth.

LoRA for the text encoder was enabled: False.

Special VAE used for training: None.

## Trigger words

You should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
[ "# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir\n\n<Gallery />", "## Model description\n\nThese are rshir/sdxl-lora-rshir LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir\n\n<Gallery />", "## Model description\n\nThese are rshir/sdxl-lora-rshir LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ 68, 28, 70, 32, 25 ]
[ "TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n# AutoTrain SDXL LoRA DreamBooth - rshir/sdxl-lora-rshir\n\n<Gallery />## Model description\n\nThese are rshir/sdxl-lora-rshir LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.## Trigger words\n\nYou should use A photo of Roman Shirochenko wearing casual clothes, taking a selfie, and smiling. to trigger the image generation.## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
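Since the card's getting-started section is blank, here is a generic, hedged loading sketch for this `phi3` checkpoint. It assumes the repository ships custom code and a usable chat template (per the `custom_code` tag), and the example question is invented, not documented usage:

```python
# Generic, hedged loading sketch for this phi3 checkpoint; the card's own
# getting-started section is blank, so this is an assumption, not the
# author's documented usage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AbhishekG13/phi-3-alto-shaam-qna"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```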
{"library_name": "transformers", "tags": []}
AbhishekG13/phi-3-alto-shaam-qna
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:02:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 45, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #phi3 #text-generation #conversational #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Image-Arousal-new This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6535 - Accuracy: 0.4591 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:------:|:----:|:---------------:|:--------:| | 1.2322 | 0.1855 | 100 | 1.2411 | 0.4452 | | 1.1613 | 0.3711 | 200 | 1.2600 | 0.3987 | | 1.2851 | 0.5566 | 300 | 1.2428 | 0.4052 | | 1.1931 | 0.7421 | 400 | 1.2041 | 0.4559 | | 1.1098 | 0.9276 | 500 | 1.1918 | 0.4586 | | 1.1714 | 1.1132 | 600 | 1.1806 | 0.4721 | | 1.1216 | 1.2987 | 700 | 1.1692 | 0.4651 | | 1.2208 | 1.4842 | 800 | 1.1801 | 0.4614 | | 1.0644 | 1.6698 | 900 | 1.1775 | 0.4596 | | 1.1638 | 1.8553 | 1000 | 1.2031 | 0.4721 | | 0.9559 | 2.0408 | 1100 | 1.2392 | 0.4521 | | 0.8442 | 2.2263 | 1200 | 1.2544 | 0.4661 | | 0.8713 | 2.4119 | 1300 | 1.2792 | 0.4744 | | 0.8442 | 2.5974 | 1400 | 1.2618 | 0.4647 | | 0.831 | 2.7829 | 1500 | 1.3202 | 0.4554 | | 0.7774 | 2.9685 | 1600 | 1.3087 | 0.4572 | | 0.5501 | 3.1540 | 1700 | 1.4975 | 0.4600 | | 0.6069 | 3.3395 | 1800 | 1.5869 | 0.4512 | | 0.4397 | 3.5250 | 1900 | 1.6458 | 0.4387 | | 0.4468 | 3.7106 | 2000 | 1.6341 | 0.4493 | | 0.4198 | 3.8961 | 2100 | 1.6535 | 0.4591 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
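A minimal inference sketch for this classifier with the `transformers` image-classification pipeline; the image path is a placeholder:

```python
# Minimal inference sketch with the transformers image-classification
# pipeline; "example.jpg" is a placeholder path.
from transformers import pipeline

classifier = pipeline("image-classification", model="SeyedAli/Image-Arousal-new")
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```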
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "Image-Arousal-new", "results": []}]}
SeyedAli/Image-Arousal-new
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:08:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
Image-Arousal-new ================= This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6535 * Accuracy: 0.4591 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 65, 101, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_withdpo_4iters_bs256_432lr_iter_2 This model is a fine-tuned version of [ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1](https://huggingface.co/ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1) on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
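A hedged usage sketch for this DPO-tuned checkpoint; it assumes the model inherits a chat template from its Zephyr-style SFT lineage, which the card does not explicitly document:

```python
# Hedged usage sketch for this DPO-tuned Mistral checkpoint. Assumes a
# chat template inherited from its Zephyr-style SFT lineage.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=80, do_sample=False)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```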
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1", "model-index": [{"name": "0.001_withdpo_4iters_bs256_432lr_iter_2", "results": []}]}
ShenaoZ/0.001_withdpo_4iters_bs256_432lr_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:updated", "dataset:original", "base_model:ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:08:52+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_withdpo_4iters_bs256_432lr_iter_2 This model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.001_withdpo_4iters_bs256_432lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_withdpo_4iters_bs256_432lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ 99, 71, 7, 9, 9, 4, 155, 5, 44 ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-updated #dataset-original #base_model-ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# 0.001_withdpo_4iters_bs256_432lr_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_sample1_4iters_bs256_iter_1 on the updated and the original datasets.## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1### Training results### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card
## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)


## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` library installed.

```bash
pip install transformers==4.40.1
```

Also make sure you are providing your huggingface token to the pipeline if the model is hosted in a private repo.
  - Either leave `token=True` in the `pipeline` and log in to huggingface_hub by running

```python
import huggingface_hub
huggingface_hub.login(<ACCESS_TOKEN>)
```

  - Or directly pass your <ACCESS_TOKEN> to `token` in the `pipeline`

```python
from transformers import pipeline

generate_text = pipeline(
    model="Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data",
    torch_dtype="auto",
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
    token=True,
)

# generate configuration can be modified to your needs
# generate_text.model.generation_config.min_new_tokens = 1
# generate_text.model.generation_config.max_new_tokens = 192
# generate_text.model.generation_config.do_sample = True
# generate_text.model.generation_config.num_beams = 1
# generate_text.model.generation_config.temperature = float(0.3)
# generate_text.model.generation_config.repetition_penalty = float(1.2)

res = generate_text(
    "Why is drinking water so healthy?",
    renormalize_logits=True
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?<|end_of_text|><|answer|>
```

Alternatively, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the `transformers` package, this will allow you to set `trust_remote_code=False`.
```python from h2oai_pipeline import H2OTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data", use_fast=True, padding_side="left", trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data", torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer) # generate configuration can be modified to your needs # generate_text.model.generation_config.min_new_tokens = 1 # generate_text.model.generation_config.max_new_tokens = 192 # generate_text.model.generation_config.do_sample = True # generate_text.model.generation_config.num_beams = 1 # generate_text.model.generation_config.temperature = float(0.3) # generate_text.model.generation_config.repetition_penalty = float(1.2) res = generate_text( "Why is drinking water so healthy?", renormalize_logits=True ) print(res[0]["generated_text"]) ``` You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data" # either local folder or huggingface model name # Important: The prompt needs to be in the same format the model was trained with. # You can find an example prompt in the experiment logs. prompt = "<|prompt|>How are you?<|end_of_text|><|answer|>" tokenizer = AutoTokenizer.from_pretrained( model_name, use_fast=True, trust_remote_code=True, ) model = AutoModelForCausalLM.from_pretrained( model_name, torch_dtype="auto", device_map={"": "cuda:0"}, trust_remote_code=True, ) model.cuda().eval() inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda") # generate configuration can be modified to your needs # model.generation_config.min_new_tokens = 1 # model.generation_config.max_new_tokens = 192 # model.generation_config.do_sample = True # model.generation_config.num_beams = 1 # model.generation_config.temperature = float(0.3) # model.generation_config.repetition_penalty = float(1.2) tokens = model.generate( input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], renormalize_logits=True )[0] tokens = tokens[inputs["input_ids"].shape[1]:] answer = tokenizer.decode(tokens, skip_special_tokens=True) print(answer) ``` ## Quantization and sharding You can load the models using quantization by specifying ```load_in_8bit=True``` or ```load_in_4bit=True```. Also, sharding on multiple GPUs is possible by setting ```device_map=auto```. 
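For example, a 4-bit quantized, automatically sharded load might look like the sketch below. This is illustrative rather than H2O-documented usage, and it requires the `bitsandbytes` package:

```python
# Sketch: 4-bit quantized load, sharded automatically across visible GPUs.
# Requires bitsandbytes; flags follow the standard transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,   # or load_in_8bit=True for 8-bit quantization
    device_map="auto",   # shard across all visible GPUs
    trust_remote_code=True,
)
```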
## Model Architecture ``` LlamaForCausalLM( (model): LlamaModel( (embed_tokens): Embedding(128256, 4096, padding_idx=128001) (layers): ModuleList( (0-31): 32 x LlamaDecoderLayer( (self_attn): LlamaSdpaAttention( (q_proj): Linear(in_features=4096, out_features=4096, bias=False) (k_proj): Linear(in_features=4096, out_features=1024, bias=False) (v_proj): Linear(in_features=4096, out_features=1024, bias=False) (o_proj): Linear(in_features=4096, out_features=4096, bias=False) (rotary_emb): LlamaRotaryEmbedding() ) (mlp): LlamaMLP( (gate_proj): Linear(in_features=4096, out_features=14336, bias=False) (up_proj): Linear(in_features=4096, out_features=14336, bias=False) (down_proj): Linear(in_features=14336, out_features=4096, bias=False) (act_fn): SiLU() ) (input_layernorm): LlamaRMSNorm() (post_attention_layernorm): LlamaRMSNorm() ) ) (norm): LlamaRMSNorm() ) (lm_head): Linear(in_features=4096, out_features=128256, bias=False) ) ``` ## Model Configuration This model was trained using H2O LLM Studio and with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models. ## Disclaimer Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions. - Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints. - Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion. - Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model. - Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities. - Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues. - Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes. By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
{"language": ["en"], "library_name": "transformers", "tags": ["gpt", "llm", "large language model", "h2o-llmstudio"], "inference": false, "thumbnail": "https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico"}
Aaryan-Nakhat/experiment-41-intelligent-layer-2-plus-exp-39-data
null
[ "transformers", "safetensors", "llama", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "conversational", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:09:38+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #autotrain_compatible #text-generation-inference #region-us
# Model Card
## Summary

This model was trained using H2O LLM Studio.
- Base model: meta-llama/Meta-Llama-3-8B-Instruct

## Usage

To use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.



Also make sure you are providing your huggingface token to the pipeline if the model is hosted in a private repo.
 - Either leave 'token=True' in the 'pipeline' and log in to huggingface_hub by running
 
 - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'



You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:





Alternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.




You may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:

## Quantization and sharding

You can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .

## Model Architecture

## Model Configuration

This model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.
By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
[ "# Model Card", "## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: meta-llama/Meta-Llama-3-8B-Instruct", "## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n - Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n \n - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:", "## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .", "## Model Architecture", "## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.", "## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. 
It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #autotrain_compatible #text-generation-inference #region-us \n", "# Model Card", "## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: meta-llama/Meta-Llama-3-8B-Instruct", "## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n - Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n \n - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:", "## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .", "## Model Architecture", "## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.", "## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. 
Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it." ]
[ 53, 3, 36, 213, 37, 4, 41, 447 ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #gpt #llm #large language model #h2o-llmstudio #conversational #en #autotrain_compatible #text-generation-inference #region-us \n# Model Card## Summary\n\nThis model was trained using H2O LLM Studio.\n- Base model: meta-llama/Meta-Llama-3-8B-Instruct## Usage\n\nTo use the model with the 'transformers' library on a machine with GPUs, first make sure you have the 'transformers' library installed.\n\n\n\nAlso make sure you are providing your huggingface token to the pipeline if the model is lying in a private repo.\n - Either leave 'token=True' in the 'pipeline' and login to hugginface_hub by running\n \n - Or directly pass your <ACCESS_TOKEN> to 'token' in the 'pipeline'\n\n\n\nYou can print a sample prompt after the preprocessing step to see how it is feed to the tokenizer:\n\n\n\n\n\nAlternatively, you can download h2oai_pipeline.py, store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer. If the model and the tokenizer are fully supported in the 'transformers' package, this will allow you to set 'trust_remote_code=False'.\n\n\n\n\nYou may also construct the pipeline from the loaded model and tokenizer yourself and consider the preprocessing steps:## Quantization and sharding\n\nYou can load the models using quantization by specifying or . Also, sharding on multiple GPUs is possible by setting .## Model Architecture## Model Configuration\n\nThis model was trained using H2O LLM Studio and with the configuration in URL. Visit H2O LLM Studio to learn how to train your own large language models.## Disclaimer\n\nPlease read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.\n\n- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.\n- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.\n- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.\n- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.\n- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. 
Your feedback will help improve the model and mitigate potential issues.\n- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.\n\nBy using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it." ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
research-dump/Fine_tuned_bert-base-uncased_TAQA_extension
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:12:10+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software ## Citation [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ 37, 6, 4, 75, 23, 3, 5, 8, 9, 8, 34, 20, 4, 5, 5, 11, 13, 12, 3, 10, 6, 5, 6, 4, 5, 7, 49, 7, 7, 5, 5, 15, 7, 7, 8, 5 ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n# Model Card for Model ID## Model Details### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Downstream Use [optional]### Out-of-Scope Use## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.## How to Get Started with the Model\n\nUse the code below to get started with the model.## Training Details### Training Data### Training Procedure#### Preprocessing [optional]#### Training Hyperparameters\n\n- Training regime:#### Speeds, Sizes, Times [optional]## Evaluation### Testing Data, Factors & Metrics#### Testing Data#### Factors#### Metrics### Results#### Summary## Model Examination [optional]## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:## Technical Specifications [optional]### Model Architecture and Objective### Compute Infrastructure#### Hardware#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Model Card Authors [optional]## Model Card Contact" ]
video-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-ucf101-subset This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2776 - Accuracy: 0.8429 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 300 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.7979 | 0.25 | 75 | 1.5073 | 0.3857 | | 0.6556 | 1.25 | 150 | 0.8398 | 0.6571 | | 0.3096 | 2.25 | 225 | 0.2880 | 0.8571 | | 0.1904 | 3.25 | 300 | 0.2776 | 0.8429 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
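The card stops at training details; for completeness, here is a hedged inference sketch using the `transformers` VideoMAE classes. The random frames stand in for a real clip, and it assumes the checkpoint stores the `id2label` mapping written during fine-tuning.

```python
# Hedged inference sketch for this checkpoint, assuming the standard
# transformers VideoMAE API. Random frames stand in for a real 16-frame clip.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "kkumtori/videomae-base-finetuned-ucf101-subset"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# VideoMAE-base expects 16 RGB frames; replace this stub with real video frames.
video = list(np.random.randint(0, 255, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])  # assumes id2label was saved
```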
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-ucf101-subset", "results": []}]}
kkumtori/videomae-base-finetuned-ucf101-subset
null
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:13:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us
videomae-base-finetuned-ucf101-subset ===================================== This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2776 * Accuracy: 0.8429 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * training\_steps: 300 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 300", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 61, 117, 5, 44 ]
[ "TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 300### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1-4x4** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1-4x4**. ## Usage model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"])
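The usage snippet above omits imports and what to do with the loaded Q-table; a hedged completion follows. Note that `load_from_hub` is the helper defined in the Deep RL course notebook rather than a library import, and the assumption that the pickled dict carries `qtable` and `env_id` keys follows that course's convention.

```python
# Hedged completion of the usage snippet above. load_from_hub is the helper
# from the Deep RL course notebook; the "qtable"/"env_id" keys are assumed
# to follow that course's convention.
import gymnasium as gym
import numpy as np

model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4", filename="q-learning.pkl")
env = gym.make(model["env_id"])  # add is_slippery=False here if the model was trained that way

qtable = np.array(model["qtable"])
state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # act greedily on the learned Q-values
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```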
{"tags": ["FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4", "type": "FrozenLake-v1-4x4"}, "metrics": [{"type": "mean_reward", "value": "0.65 +/- 0.48", "name": "mean_reward", "verified": false}]}]}]}
ws11yrin/q-FrozenLake-v1-4x4
null
[ "FrozenLake-v1-4x4", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-01T12:14:41+00:00
[]
[]
TAGS #FrozenLake-v1-4x4 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1-4x4 This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4. ## Usage model = load_from_hub(repo_id="ws11yrin/q-FrozenLake-v1-4x4", filename="URL") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = URL(model["env_id"])
[ "# Q-Learning Agent playing1 FrozenLake-v1-4x4\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
[ "TAGS\n#FrozenLake-v1-4x4 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1-4x4\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
[ 31, 120 ]
[ "TAGS\n#FrozenLake-v1-4x4 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n# Q-Learning Agent playing1 FrozenLake-v1-4x4\n This is a trained model of a Q-Learning agent playing FrozenLake-v1-4x4 .\n\n ## Usage\n\n model = load_from_hub(repo_id=\"ws11yrin/q-FrozenLake-v1-4x4\", filename=\"URL\")\n\n # Don't forget to check if you need to add additional attributes (is_slippery=False etc)\n env = URL(model[\"env_id\"])" ]
null
transformers
# Uploaded model - **Developed by:** Crysiss - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
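The card does not show how to load the model; a minimal sketch using Unsloth's `FastLanguageModel` API is given below. The `max_seq_length` and 4-bit flag are illustrative choices, not values taken from this card.

```python
# Minimal loading sketch with Unsloth's FastLanguageModel.
# max_seq_length and load_in_4bit are illustrative, not from this card.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Crysiss/llama3-8B-welfare-unsloth-last-2",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```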
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
Crysiss/llama3-8B-welfare-unsloth-last-2
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:14:54+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: Crysiss - License: apache-2.0 - Finetuned from model: unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 64, 80 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: Crysiss\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.5675 | 0.08 | 5000 | 1.7421 | | 1.6789 | 0.15 | 10000 | 1.5231 | | 1.5321 | 0.23 | 15000 | 1.4227 | | 1.4526 | 0.31 | 20000 | 1.3572 | | 1.3952 | 0.38 | 25000 | 1.3033 | | 1.3431 | 0.46 | 30000 | 1.2568 | | 1.2983 | 0.54 | 35000 | 1.2112 | | 1.2522 | 0.61 | 40000 | 1.1708 | | 1.2095 | 0.69 | 45000 | 1.1319 | | 1.1742 | 0.77 | 50000 | 1.0989 | | 1.1429 | 0.84 | 55000 | 1.0754 | | 1.1244 | 0.92 | 60000 | 1.0634 | | 1.1143 | 1.0 | 65000 | 1.0604 | ### Framework versions - Transformers 4.37.2 - Pytorch 1.13.1.post200 - Datasets 2.14.6 - Tokenizers 0.15.1
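For readers who want to reproduce the run, here is a hedged sketch of the `TrainingArguments` mirroring the hyperparameters listed above; model and dataset wiring are omitted, and anything not listed in the card is left at its default.

```python
# Sketch of TrainingArguments matching the hyperparameters reported above.
# Only values listed in the card are set; everything else stays default.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="codeparrot-ds",
    learning_rate=5e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=8,  # 32 * 8 = effective batch size 256
    lr_scheduler_type="cosine",
    warmup_steps=1000,
    num_train_epochs=1,
    fp16=True,                      # "Native AMP" mixed precision
)
```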
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "gpt2", "model-index": [{"name": "codeparrot-ds", "results": []}]}
mengren1942/codeparrot-ds
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:15:00+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
codeparrot-ds ============= This model is a fine-tuned version of gpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0604 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 1000 * num\_epochs: 1 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.37.2 * Pytorch 1.13.1.post200 * Datasets 2.14.6 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 1.13.1.post200\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 1.13.1.post200\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
[ 56, 153, 5, 43 ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 1\n* mixed\\_precision\\_training: Native AMP### Training results### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 1.13.1.post200\n* Datasets 2.14.6\n* Tokenizers 0.15.1" ]
null
mlx
# mlx-community/llama-3-youko-8b-8bit This model was converted to MLX format from [`rinna/llama-3-youko-8b`](https://huggingface.co/rinna/llama-3-youko-8b) using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/rinna/llama-3-youko-8b) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/llama-3-youko-8b-8bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["ja", "en"], "license": "llama3", "tags": ["mlx"], "datasets": ["mc4", "wikipedia", "EleutherAI/pile", "oscar-corpus/colossal-oscar-1.0", "cc100"], "thumbnail": "https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png", "inference": false}
mlx-community/llama-3-youko-8b-8bit
null
[ "mlx", "safetensors", "llama", "ja", "en", "dataset:mc4", "dataset:wikipedia", "dataset:EleutherAI/pile", "dataset:oscar-corpus/colossal-oscar-1.0", "dataset:cc100", "license:llama3", "region:us" ]
null
2024-05-01T12:15:12+00:00
[]
[ "ja", "en" ]
TAGS #mlx #safetensors #llama #ja #en #dataset-mc4 #dataset-wikipedia #dataset-EleutherAI/pile #dataset-oscar-corpus/colossal-oscar-1.0 #dataset-cc100 #license-llama3 #region-us
# mlx-community/llama-3-youko-8b-8bit This model was converted to MLX format from ['rinna/llama-3-youko-8b']() using mlx-lm version 0.12.1. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/llama-3-youko-8b-8bit\nThis model was converted to MLX format from ['rinna/llama-3-youko-8b']() using mlx-lm version 0.12.1.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #ja #en #dataset-mc4 #dataset-wikipedia #dataset-EleutherAI/pile #dataset-oscar-corpus/colossal-oscar-1.0 #dataset-cc100 #license-llama3 #region-us \n", "# mlx-community/llama-3-youko-8b-8bit\nThis model was converted to MLX format from ['rinna/llama-3-youko-8b']() using mlx-lm version 0.12.1.\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ 67, 73, 6 ]
[ "TAGS\n#mlx #safetensors #llama #ja #en #dataset-mc4 #dataset-wikipedia #dataset-EleutherAI/pile #dataset-oscar-corpus/colossal-oscar-1.0 #dataset-cc100 #license-llama3 #region-us \n# mlx-community/llama-3-youko-8b-8bit\nThis model was converted to MLX format from ['rinna/llama-3-youko-8b']() using mlx-lm version 0.12.1.\nRefer to the original model card for more details on the model.## Use with mlx" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_3_epochs_no_perturb This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1620 - Precision: 0.2876 - Recall: 0.3063 - F1: 0.2967 - Accuracy: 0.9558 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 103 | 0.1906 | 0.2105 | 0.1778 | 0.1928 | 0.9508 | | No log | 2.0 | 206 | 0.1676 | 0.2550 | 0.3016 | 0.2764 | 0.9534 | | No log | 3.0 | 309 | 0.1620 | 0.2876 | 0.3063 | 0.2967 | 0.9558 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.0+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
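As with most auto-generated trainer cards, no inference example is included; a minimal sketch with the `transformers` token-classification pipeline follows. The input sentence and the aggregation strategy are illustrative choices, not taken from this card.

```python
# Minimal inference sketch using the transformers token-classification pipeline.
# The input sentence and aggregation strategy are illustrative choices.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cria111/model_3_epochs_no_perturb",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```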
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "model_3_epochs_no_perturb", "results": []}]}
cria111/model_3_epochs_no_perturb
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "token-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:16:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
model\_3\_epochs\_no\_perturb ============================= This model is a fine-tuned version of distilbert/distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1620 * Precision: 0.2876 * Recall: 0.3063 * F1: 0.2967 * Accuracy: 0.9558 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.0+cpu * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.0+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.0+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ 63, 101, 5, 42 ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #token-classification #generated_from_trainer #base_model-distilbert/distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3### Training results### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.0+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5.testy.tedtalks.simple This model is a fine-tuned version of [samzirbo/mT5.pretrained.en-es.16K](https://huggingface.co/samzirbo/mT5.pretrained.en-es.16K) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: nan - eval_bleu: 0.0 - eval_meteor: 0.0 - eval_chrF++: 0.0 - eval_runtime: 87.7839 - eval_samples_per_second: 22.783 - eval_steps_per_second: 0.365 - epoch: 0.4905 - step: 4500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["generated_from_trainer"], "base_model": "samzirbo/mT5.pretrained.en-es.16K", "model-index": [{"name": "mT5.testy.tedtalks.simple", "results": []}]}
samzirbo/mT5.testy.tedtalks.simple
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:samzirbo/mT5.pretrained.en-es.16K", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:16:17+00:00
[]
[]
TAGS #transformers #safetensors #mt5 #text2text-generation #generated_from_trainer #base_model-samzirbo/mT5.pretrained.en-es.16K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# URL This model is a fine-tuned version of samzirbo/URL-es.16K on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: nan - eval_bleu: 0.0 - eval_meteor: 0.0 - eval_chrF++: 0.0 - eval_runtime: 87.7839 - eval_samples_per_second: 22.783 - eval_steps_per_second: 0.365 - epoch: 0.4905 - step: 4500 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# URL\n\nThis model is a fine-tuned version of samzirbo/URL-es.16K on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: nan\n- eval_bleu: 0.0\n- eval_meteor: 0.0\n- eval_chrF++: 0.0\n- eval_runtime: 87.7839\n- eval_samples_per_second: 22.783\n- eval_steps_per_second: 0.365\n- epoch: 0.4905\n- step: 4500", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #mt5 #text2text-generation #generated_from_trainer #base_model-samzirbo/mT5.pretrained.en-es.16K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# URL\n\nThis model is a fine-tuned version of samzirbo/URL-es.16K on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: nan\n- eval_bleu: 0.0\n- eval_meteor: 0.0\n- eval_chrF++: 0.0\n- eval_runtime: 87.7839\n- eval_samples_per_second: 22.783\n- eval_steps_per_second: 0.365\n- epoch: 0.4905\n- step: 4500", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ 65, 130, 7, 9, 9, 4, 117, 44 ]
[ "TAGS\n#transformers #safetensors #mt5 #text2text-generation #generated_from_trainer #base_model-samzirbo/mT5.pretrained.en-es.16K #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n# URL\n\nThis model is a fine-tuned version of samzirbo/URL-es.16K on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: nan\n- eval_bleu: 0.0\n- eval_meteor: 0.0\n- eval_chrF++: 0.0\n- eval_runtime: 87.7839\n- eval_samples_per_second: 22.783\n- eval_steps_per_second: 0.365\n- epoch: 0.4905\n- step: 4500## Model description\n\nMore information needed## Intended uses & limitations\n\nMore information needed## Training and evaluation data\n\nMore information needed## Training procedure### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0005\n- train_batch_size: 64\n- eval_batch_size: 64\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 1000\n- num_epochs: 3\n- mixed_precision_training: Native AMP### Framework versions\n\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
transformers
# Uploaded model - **Developed by:** ghaluh - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
ghaluh/lora_SS_phi3_cefr_cep
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:16:31+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: ghaluh - License: apache-2.0 - Finetuned from model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: ghaluh\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ghaluh\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ 68, 84 ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-unsloth/Phi-3-mini-4k-instruct-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n# Uploaded model\n\n- Developed by: ghaluh\n- License: apache-2.0\n- Finetuned from model : unsloth/Phi-3-mini-4k-instruct-bnb-4bit\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: DeMuenu/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
DeMuenu/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-05-01T12:17:29+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: DeMuenu/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: DeMuenu/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: DeMuenu/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ 35, 199 ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: DeMuenu/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
> Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview) ## Model Details **Llama-3-KoEn-8B** The Llama-3-KoEn-8B model is a continued-pretrained language model based on Llama-3-8B. This model is trained on a Korean+English corpus. Training was done on TPUv4-256, with the warm support of Google's TRC program. **Note for [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)** Applying the idea from the [Chat Vector paper](https://arxiv.org/abs/2310.04799), I released an instruction model named [Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview). It is NOT finetuned with any Korean instruction set (hence `preview`), but it should be a great starting point for creating new Chat/Instruct models. **Model developers** Junbum Lee (Beomi) **Variations** Llama-3-KoEn comes in one size — 8B. **Input** Models input text only. **Output** Models generate text and code only. **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. <table> <tr> <td> </td> <td><strong>Training Data</strong> </td> <td><strong>Params</strong> </td> <td><strong>Context length</strong> </td> <td><strong>GQA</strong> </td> <td><strong>Token count</strong> </td> <td><strong>Knowledge cutoff</strong> </td> </tr> <tr> <td rowspan="2" >Llama-3-KoEn </td> <td rowspan="2" >Same as *Llama-2-KoEn Dataset </td> <td>8B </td> <td>8k </td> <td>Yes </td> <td rowspan="2" >XXB+ </td> <td>Jun, 2023 </td> </tr> </table> **Model Release Date** Pre-release @ 2024.05.01 **Status** This is a static model trained on an offline dataset. **License** CC-By-NC-SA-4.0 + Llama3 License: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license) ## Intended Use **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**. **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. ## How to use TBD ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. 
As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/). ## Ethical Considerations and Limitations The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide) ## Citation instructions **Llama-3-Open-Ko** ``` @article{llama3koen, title={Llama-3-KoEn}, author={L, Junbum}, year={2024}, url={https://huggingface.co/beomi/Llama-3-KoEn-8B} } ``` **Original Llama-3** ``` @article{llama3modelcard, title={Llama 3 Model Card}, author={AI@Meta}, year={2024}, url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md} } ```
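Since the card's "How to use" section is still TBD, here is a hedged placeholder using the standard `transformers` causal-LM API; it should apply to a decoder-only Llama 3 checkpoint like this one, though the dtype and sampling settings are assumptions:

```python
# Minimal sketch, assuming standard transformers text generation works for this checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "beomi/Llama-3-KoEn-8B-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit your accelerator
    device_map="auto",
)

prompt = "한국의 수도는"  # illustrative Korean prompt ("The capital of Korea is")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```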
{"language": ["en", "ko"], "license": "cc-by-nc-sa-4.0", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3", "llama-3-ko"], "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE"}
beomi/Llama-3-KoEn-8B-preview
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-3", "llama-3-ko", "conversational", "en", "ko", "arxiv:2310.04799", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-01T12:18:52+00:00
[ "2310.04799" ]
[ "en", "ko" ]
TAGS #transformers #pytorch #safetensors #llama #text-generation #facebook #meta #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
> > Update @ 2024.05.01: Pre-Release Llama-3-KoEn-8B model & Llama-3-KoEn-8B-Instruct-preview > > > Model Details ------------- Llama-3-KoEn-8B The Llama-3-KoEn-8B model is a continued-pretrained language model based on Llama-3-8B. This model is trained on a Korean+English corpus. Training was done on TPUv4-256, with the warm support of Google's TRC program. Note for Llama-3-KoEn-8B-Instruct-preview Applying the idea from the Chat Vector paper, I released an instruction model named Llama-3-KoEn-8B-Instruct-preview. It is NOT finetuned with any Korean instruction set (hence 'preview'), but it should be a great starting point for creating new Chat/Instruct models. Model developers Junbum Lee (Beomi) Variations Llama-3-KoEn comes in one size — 8B. Input Models input text only. Output Models generate text and code only. Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. Model Release Date Pre-release @ 2024.05.01 Status This is a static model trained on an offline dataset. License CC-By-NC-SA-4.0 + Llama3 License: URL Intended Use ------------ Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks. Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English. Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy. How to use ---------- TBD ### Responsibility & Safety We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community. Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications. Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience. As part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started. #### Responsible release In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision. Misuse If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL Ethical Considerations and Limitations -------------------------------------- The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress. But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety. Please see the Responsible Use Guide available at URL instructions Llama-3-Open-Ko Original Llama-3
[ "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\nLlama-3-Open-Ko\n\n\nOriginal Llama-3" ]
[ "TAGS\n#transformers #pytorch #safetensors #llama #text-generation #facebook #meta #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.", "#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\nLlama-3-Open-Ko\n\n\nOriginal Llama-3" ]
[ 88, 270, 430 ]
[ "TAGS\n#transformers #pytorch #safetensors #llama #text-generation #facebook #meta #llama-3 #llama-3-ko #conversational #en #ko #arxiv-2310.04799 #license-cc-by-nc-sa-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n### Responsibility & Safety\n\n\nWe believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.\n\n\nFoundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.\n\n\nRather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.\n\n\nAs part of the Llama 3 release, we updated our Responsible Use Guide to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including Meta Llama Guard 2 and Code Shield safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a reference implementation to get you started.#### Responsible release\n\n\nIn addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.\n\n\nMisuse\n\n\nIf you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at URL\n\n\nEthical Considerations and Limitations\n--------------------------------------\n\n\nThe core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without insertion unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.\n\n\nBut Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. 
As outlined in the Responsible Use Guide, we recommend incorporating Purple Llama solutions into your workflows and specifically Llama Guard which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.\n\n\nPlease see the Responsible Use Guide available at URL\n\n\ninstructions\n\n\nLlama-3-Open-Ko\n\n\nOriginal Llama-3" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask This model is a fine-tuned version of [microsoft/swinv2-tiny-patch4-window8-256](https://huggingface.co/microsoft/swinv2-tiny-patch4-window8-256) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.3601 - Accuracy: 0.8565 - Recall: 0.8565 - F1: 0.8537 - Precision: 0.8631 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | F1 | Precision | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | No log | 0.9974 | 293 | 0.6645 | 0.7820 | 0.7820 | 0.7661 | 0.7678 | | No log | 1.9983 | 587 | 0.5493 | 0.8033 | 0.8033 | 0.7897 | 0.7964 | | No log | 2.9991 | 881 | 0.4242 | 0.8416 | 0.8416 | 0.8380 | 0.8460 | | No log | 4.0 | 1175 | 0.4124 | 0.8310 | 0.8310 | 0.8288 | 0.8299 | | No log | 4.9974 | 1468 | 0.3769 | 0.8412 | 0.8412 | 0.8388 | 0.8478 | | No log | 5.9983 | 1762 | 0.3589 | 0.8501 | 0.8501 | 0.8481 | 0.8582 | | No log | 6.9991 | 2056 | 0.3503 | 0.8455 | 0.8455 | 0.8456 | 0.8535 | | No log | 8.0 | 2350 | 0.3400 | 0.8404 | 0.8404 | 0.8416 | 0.8465 | | No log | 8.9974 | 2643 | 0.3533 | 0.8480 | 0.8480 | 0.8480 | 0.8501 | | 0.5214 | 9.9745 | 2930 | 0.3358 | 0.8459 | 0.8459 | 0.8460 | 0.8473 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.0a0+81ea7a4 - Datasets 2.19.0 - Tokenizers 0.19.1
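The card omits a usage example; a hedged sketch with the high-level `transformers` pipeline is shown below (the image path is a placeholder, and the 17 class labels come from the private imagefolder dataset):

```python
# Minimal sketch using the transformers image-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="Kushagra07/swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask",
)
predictions = classifier("path/to/document_image.png")  # placeholder image path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```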
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "f1", "precision"], "base_model": "microsoft/swinv2-tiny-patch4-window8-256", "model-index": [{"name": "swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.8565346956151554, "name": "Accuracy"}, {"type": "recall", "value": 0.8565346956151554, "name": "Recall"}, {"type": "f1", "value": 0.853731165851545, "name": "F1"}, {"type": "precision", "value": 0.8631033150629456, "name": "Precision"}]}]}]}
Kushagra07/swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask
null
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swinv2-tiny-patch4-window8-256", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-01T12:19:50+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swinv2-tiny-patch4-window8-256-finetuned-ind-17-imbalanced-aadhaarmask ====================================================================== This model is a fine-tuned version of microsoft/swinv2-tiny-patch4-window8-256 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.3601 * Accuracy: 0.8565 * Recall: 0.8565 * F1: 0.8537 * Precision: 0.8631 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.1 * Pytorch 2.2.0a0+81ea7a4 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ 79, 142, 5, 48 ]
[ "TAGS\n#transformers #tensorboard #safetensors #swinv2 #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swinv2-tiny-patch4-window8-256 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10### Training results### Framework versions\n\n\n* Transformers 4.40.1\n* Pytorch 2.2.0a0+81ea7a4\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]