| Column | Dtype | Values / Lengths |
|:----------------|:----------------|:-----------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | n/a |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
null
adapter-transformers
# Adapter `ltuzova/pretrain_tapt_unipelt_adpater_fix_train` for roberta-base

An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset](https://huggingface.co/datasets/BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset/) dataset and includes a prediction head for masked LM.

This adapter was created for use with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.

## Usage

First, install `adapters`:

```
pip install -U adapters
```

Now, the adapter can be loaded and activated like this:

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
adapter_name = model.load_adapter("ltuzova/pretrain_tapt_unipelt_adpater_fix_train", source="hf", set_active=True)
```

## Architecture & Training

<!-- Add some description here -->

## Evaluation results

<!-- Add some description here -->

## Citation

<!-- Add some description here -->
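With the adapter active, the bundled masked-LM head can be exercised directly. The following is a minimal sketch, not part of the original card: the example sentence and the decoding step are illustrative, and it assumes the head loaded by `load_adapter` returns standard `logits`.

```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoAdapterModel.from_pretrained("roberta-base")
model.load_adapter(
    "ltuzova/pretrain_tapt_unipelt_adpater_fix_train", source="hf", set_active=True
)

# Illustrative fill-mask query against the adapter's masked-LM head.
inputs = tokenizer("This product was very <mask> for the price.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```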
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset"]}
ltuzova/pretrain_tapt_unipelt_adpater_fix_train
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset", "region:us" ]
null
2024-04-20T07:54:05+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset #region-us
# Adapter 'ltuzova/pretrain_tapt_unipelt_adpater_fix_train' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'ltuzova/pretrain_tapt_unipelt_adpater_fix_train' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset #region-us \n", "# Adapter 'ltuzova/pretrain_tapt_unipelt_adpater_fix_train' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_helpfulness_TAPT_pretraining_dataset dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AJosh/G-22-2
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T07:55:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# financeLM_outputpath_sa

This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 2.4027

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 15

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2041 | 1.0 | 239 | 1.7449 |
| 1.5789 | 2.0 | 478 | 1.6901 |
| 1.3553 | 3.0 | 717 | 1.7043 |
| 1.1684 | 4.0 | 957 | 1.7492 |
| 1.0153 | 5.0 | 1196 | 1.8371 |
| 0.8759 | 6.0 | 1435 | 1.9329 |
| 0.762 | 7.0 | 1674 | 2.0288 |
| 0.6636 | 8.0 | 1914 | 2.1386 |
| 0.5889 | 9.0 | 2153 | 2.2152 |
| 0.5311 | 10.0 | 2392 | 2.2569 |
| 0.4837 | 11.0 | 2631 | 2.3196 |
| 0.4459 | 12.0 | 2871 | 2.3524 |
| 0.419 | 13.0 | 3110 | 2.3839 |
| 0.3996 | 14.0 | 3349 | 2.3953 |
| 0.3834 | 14.98 | 3585 | 2.4027 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.2+cu121
- Datasets 2.14.5
- Tokenizers 0.14.1
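The hyperparameters above map one-to-one onto `transformers.TrainingArguments`. Below is a minimal sketch of an equivalent configuration; the output directory is a hypothetical placeholder, and the `Trainer` call is left stubbed out since the card does not identify the training dataset.

```python
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2")

args = TrainingArguments(
    output_dir="financeLM_outputpath_sa",  # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 4 * 4 = total_train_batch_size 16
    lr_scheduler_type="linear",
    warmup_ratio=0.03,
    num_train_epochs=15,
    seed=42,
)

# The dataset is unknown, so the Trainer is shown as a stub only:
# trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
# trainer.train()
```

Note that in the results table the validation loss bottoms out at epoch 2 (1.6901) and climbs steadily afterward, so the final checkpoint is not the best one by eval loss.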
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2", "model-index": [{"name": "financeLM_outputpath_sa", "results": []}]}
Supersaiyan1729/financeLM_outputpath_sa
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T07:56:24+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
financeLM\_outputpath\_sa ========================= This model is a fine-tuned version of openai-community/gpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.4027 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.03 * num\_epochs: 15 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.2+cu121 * Datasets 2.14.5 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 15", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.14.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hi000000/insta_chai-llama-koen_80
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:00:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert-base-uncased-finetuned-emotions-dataset-wt

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:

- Loss: 0.4135
- Accuracy: 0.8825
- F1: 0.8836

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.2102 | 1.0 | 125 | 0.6386 | 0.792 | 0.7790 |
| 0.4984 | 2.0 | 250 | 0.4135 | 0.8825 | 0.8836 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
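Since this is a standard text-classification checkpoint, inference is a one-liner with the `pipeline` API. A minimal sketch, with an illustrative input sentence:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mayankkeshari/distilbert-base-uncased-finetuned-emotions-dataset-wt",
)

# Illustrative input; returns one of the emotion dataset's labels with a score.
print(classifier("I am thrilled with how this turned out!"))
```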
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-emotions-dataset-wt", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "config": "split", "split": "validation", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.8825, "name": "Accuracy"}, {"type": "f1", "value": 0.8835873403990151, "name": "F1"}]}]}]}
mayankkeshari/distilbert-base-uncased-finetuned-emotions-dataset-wt
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:00:16+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-emotions-dataset-wt ===================================================== This model is a fine-tuned version of distilbert-base-uncased on the emotion dataset. It achieves the following results on the evaluation set: * Loss: 0.4135 * Accuracy: 0.8825 * F1: 0.8836 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 128 * eval\_batch\_size: 128 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #dataset-emotion #base_model-distilbert-base-uncased #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 128\n* eval\\_batch\\_size: 128\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# tapt_helpfulness_seq_bn_pretraining_model_full_train

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.4917

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 21
- eval_batch_size: 21
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 20

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.746 | 1.0 | 1068 | 1.8309 |
| 1.8701 | 2.0 | 2137 | 1.6877 |
| 1.7711 | 3.0 | 3205 | 1.6275 |
| 1.7178 | 4.0 | 4274 | 1.5909 |
| 1.6876 | 5.0 | 5342 | 1.5788 |
| 1.6638 | 6.0 | 6411 | 1.5636 |
| 1.6526 | 7.0 | 7479 | 1.5344 |
| 1.6357 | 8.0 | 8548 | 1.5402 |
| 1.626 | 9.0 | 9616 | 1.5097 |
| 1.6144 | 10.0 | 10685 | 1.5111 |
| 1.611 | 11.0 | 11753 | 1.5248 |
| 1.603 | 12.0 | 12822 | 1.4989 |
| 1.6003 | 13.0 | 13890 | 1.5071 |
| 1.5915 | 14.0 | 14959 | 1.4807 |
| 1.5893 | 15.0 | 16027 | 1.4892 |
| 1.5857 | 16.0 | 17096 | 1.4794 |
| 1.5839 | 17.0 | 18164 | 1.4893 |
| 1.5806 | 18.0 | 19233 | 1.4787 |
| 1.5808 | 19.0 | 20301 | 1.4872 |
| 1.5781 | 19.99 | 21360 | 1.4917 |

### Framework versions

- Transformers 4.36.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
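The detail most easily lost when reproducing this run is the non-default optimizer configuration (betas=(0.9,0.98), epsilon=1e-06). A minimal sketch of the equivalent optimizer construction in PyTorch follows; the card does not state the training library, and the Hugging Face Trainer's "Adam" is in practice `AdamW`, so that class is assumed here.

```python
import torch
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Mirrors "Adam with betas=(0.9,0.98) and epsilon=1e-06" at learning_rate 0.0001.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    betas=(0.9, 0.98),
    eps=1e-6,
)
```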
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "tapt_helpfulness_seq_bn_pretraining_model_full_train", "results": []}]}
ltuzova/tapt_helpfulness_seq_bn_pretraining_model_full_train
null
[ "tensorboard", "generated_from_trainer", "base_model:roberta-base", "license:mit", "region:us" ]
null
2024-04-20T08:00:45+00:00
[]
[]
TAGS #tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us
tapt\_helpfulness\_seq\_bn\_pretraining\_model\_full\_train =========================================================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4917 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 21 * eval\_batch\_size: 21 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#tensorboard #generated_from_trainer #base_model-roberta-base #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png">

MoE'd up:

- [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

These were the two most interesting Llama 3 finetunes so far.

The resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw4.6
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:01:30+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct_ Which were the two most interesting llama3 finetunes as of yet. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hi000000/insta_chai-llama3_80
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:04:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert2bert-model6-last

This model is a fine-tuned version of [](https://huggingface.co/) on the id_liputan6 dataset. It achieves the following results on the evaluation set:

- Loss: 7.6361
- R1 Precision: 0.1719
- R1 Recall: 0.0279
- R1 Fmeasure: 0.0432
- R2 Precision: 0.0
- R2 Recall: 0.0
- R2 Fmeasure: 0.0
- Rl Precision: 0.1719
- Rl Recall: 0.0274
- Rl Fmeasure: 0.0428

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | R1 Precision | R1 Recall | R1 Fmeasure | R2 Precision | R2 Recall | R2 Fmeasure | Rl Precision | Rl Recall | Rl Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|:------------:|:---------:|:-----------:|
| 10.0458 | 1.0 | 4 | 8.3786 | 0.0612 | 0.0567 | 0.0584 | 0.0 | 0.0 | 0.0 | 0.0512 | 0.0468 | 0.0485 |
| 7.6302 | 2.0 | 8 | 8.0384 | 0.087 | 0.1202 | 0.1005 | 0.0 | 0.0 | 0.0 | 0.0583 | 0.08 | 0.0669 |
| 7.2136 | 3.0 | 12 | 7.7980 | 0.0598 | 0.0775 | 0.0677 | 0.0057 | 0.0081 | 0.0067 | 0.0516 | 0.067 | 0.0583 |
| 6.8639 | 4.0 | 16 | 7.8075 | 0.0938 | 0.0107 | 0.0192 | 0.0 | 0.0 | 0.0 | 0.0938 | 0.0105 | 0.0188 |
| 6.3433 | 5.0 | 20 | 7.7948 | 0.0406 | 0.0107 | 0.0168 | 0.0 | 0.0 | 0.0 | 0.0406 | 0.0105 | 0.0166 |
| 6.0891 | 6.0 | 24 | 7.7148 | 0.0469 | 0.0107 | 0.0162 | 0.0 | 0.0 | 0.0 | 0.0469 | 0.0105 | 0.015 |
| 6.0284 | 7.0 | 28 | 7.6611 | 0.1406 | 0.0179 | 0.0256 | 0.0 | 0.0 | 0.0 | 0.1406 | 0.0139 | 0.0219 |
| 5.7972 | 8.0 | 32 | 7.6732 | 0.0646 | 0.025 | 0.0332 | 0.0 | 0.0 | 0.0 | 0.0608 | 0.021 | 0.0293 |
| 5.6802 | 9.0 | 36 | 7.6398 | 0.1823 | 0.0279 | 0.0443 | 0.0 | 0.0 | 0.0 | 0.1719 | 0.0241 | 0.0396 |
| 5.4635 | 10.0 | 40 | 7.6361 | 0.1719 | 0.0279 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.1719 | 0.0274 | 0.0428 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
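Inference with an encoder-decoder checkpoint like this one typically goes through the `summarization` pipeline. A minimal sketch, assuming the repository ships a compatible tokenizer and generation config; the input string is an illustrative placeholder, not taken from the card:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Alfahluzi/bert2bert-model6-last")

# Illustrative placeholder for an id_liputan6-style Indonesian news article.
article = "Liputan6.com, Jakarta: teks berita berbahasa Indonesia untuk diringkas."
print(summarizer(article, max_length=48, min_length=8))
```

Given the near-zero ROUGE scores reported above, outputs should be treated as a weak baseline rather than usable summaries.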
{"tags": ["generated_from_trainer"], "datasets": ["id_liputan6"], "model-index": [{"name": "bert2bert-model6-last", "results": []}]}
Alfahluzi/bert2bert-model6-last
null
[ "transformers", "tensorboard", "safetensors", "encoder-decoder", "text2text-generation", "generated_from_trainer", "dataset:id_liputan6", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:05:34+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #encoder-decoder #text2text-generation #generated_from_trainer #dataset-id_liputan6 #autotrain_compatible #endpoints_compatible #region-us
bert2bert-model6-last ===================== This model is a fine-tuned version of [](URL on the id\_liputan6 dataset. It achieves the following results on the evaluation set: * Loss: 7.6361 * R1 Precision: 0.1719 * R1 Recall: 0.0279 * R1 Fmeasure: 0.0432 * R2 Precision: 0.0 * R2 Recall: 0.0 * R2 Fmeasure: 0.0 * Rl Precision: 0.1719 * Rl Recall: 0.0274 * Rl Fmeasure: 0.0428 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 10 * eval\_batch\_size: 10 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #encoder-decoder #text2text-generation #generated_from_trainer #dataset-id_liputan6 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 10\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-v0.1-compliance-copilot-risk-fluent-thunder-30

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.3932
- Precision: 0.9506
- Recall: 0.9847
- F1-score: 0.9673
- Accuracy: 0.9382

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1-score | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:--------:|:--------:|
| 0.5763 | 0.43 | 6000 | 0.7744 | 0.9300 | 0.9993 | 0.9634 | 0.9295 |
| 0.3889 | 0.85 | 12000 | 0.4365 | 0.9402 | 0.9919 | 0.9653 | 0.9339 |
| 0.7128 | 1.28 | 18000 | 0.4095 | 0.9426 | 0.9881 | 0.9648 | 0.9331 |
| 0.381 | 1.71 | 24000 | 0.3868 | 0.9462 | 0.9826 | 0.9641 | 0.9319 |
| 0.414 | 2.14 | 30000 | 0.3526 | 0.9489 | 0.9833 | 0.9658 | 0.9353 |
| 0.5657 | 2.56 | 36000 | 0.3393 | 0.9519 | 0.9824 | 0.9669 | 0.9375 |
| 0.2324 | 2.99 | 42000 | 0.4604 | 0.9426 | 0.9907 | 0.9660 | 0.9353 |
| 0.4515 | 3.42 | 48000 | 0.4154 | 0.9495 | 0.9859 | 0.9673 | 0.9382 |
| 0.636 | 3.84 | 54000 | 0.3445 | 0.9567 | 0.9745 | 0.9655 | 0.9353 |
| 0.4072 | 4.27 | 60000 | 0.4142 | 0.9472 | 0.9871 | 0.9667 | 0.9369 |
| 0.6797 | 4.7 | 66000 | 0.3932 | 0.9506 | 0.9847 | 0.9673 | 0.9382 |

### Framework versions

- PEFT 0.8.1
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
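Because this repository holds a PEFT adapter rather than full model weights, usage means attaching the adapter to the `mistralai/Mistral-7B-v0.1` base model. A minimal loading sketch, assuming `peft`, `transformers`, and `accelerate` are installed and there is enough memory for the half-precision base model:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the full-precision base model the adapter was trained against.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate
)

# Attach the LoRA/PEFT weights from this repository on top of the base model.
model = PeftModel.from_pretrained(
    base, "ripjar/Mistral-7B-v0.1-compliance-copilot-risk-fluent-thunder-30"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```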
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "accuracy"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "Mistral-7B-v0.1-compliance-copilot-risk-fluent-thunder-30", "results": []}]}
ripjar/Mistral-7B-v0.1-compliance-copilot-risk-fluent-thunder-30
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:08:46+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
Mistral-7B-v0.1-compliance-copilot-risk-fluent-thunder-30 ========================================================= This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3932 * Precision: 0.9506 * Recall: 0.9847 * F1-score: 0.9673 * Accuracy: 0.9382 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 4 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: constant * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.8.1 * Transformers 4.37.2 * Pytorch 2.1.2+cu121 * Datasets 2.16.1 * Tokenizers 0.15.0
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.1\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 4\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: constant\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.8.1\n* Transformers 4.37.2\n* Pytorch 2.1.2+cu121\n* Datasets 2.16.1\n* Tokenizers 0.15.0" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper da-nst This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_17_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.8780 - Wer: 28.6353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 11000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.0096 | 4.01 | 1000 | 0.7403 | 31.2960 | | 0.0046 | 9.0 | 2000 | 0.7646 | 29.8505 | | 0.0016 | 13.02 | 3000 | 0.7695 | 30.8398 | | 0.0009 | 18.01 | 4000 | 0.7821 | 31.2102 | | 0.0006 | 22.02 | 5000 | 0.8035 | 31.6303 | | 0.0011 | 27.01 | 6000 | 0.8169 | 29.6336 | | 0.0001 | 32.0 | 7000 | 0.8244 | 29.6246 | | 0.0 | 36.01 | 8000 | 0.8461 | 28.8205 | | 0.0 | 41.01 | 9000 | 0.8633 | 28.7754 | | 0.0 | 45.02 | 10000 | 0.8738 | 28.6986 | | 0.0 | 50.01 | 11000 | 0.8780 | 28.6353 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1
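For inference, the checkpoint can be used like any Whisper model through the `transformers` ASR pipeline. A minimal sketch, assuming the repo id from this card; `sample_danish.wav` is a placeholder file name:

```python
# Hedged sketch: transcribing Danish audio with this checkpoint via the
# automatic-speech-recognition pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nicolarsen/whisper-medium-3-F",
)
result = asr("sample_danish.wav")  # placeholder audio file
print(result["text"])
```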
{"license": "apache-2.0", "tags": ["whisper-event", "generated_from_trainer"], "datasets": ["common_voice_17_0"], "metrics": ["wer"], "base_model": "openai/whisper-medium", "model-index": [{"name": "Whisper da-nst", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "common_voice_17_0", "type": "common_voice_17_0", "config": "da", "split": "test", "args": "da"}, "metrics": [{"type": "wer", "value": 28.635316438541807, "name": "Wer"}]}]}]}
nicolarsen/whisper-medium-3-F
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "whisper-event", "generated_from_trainer", "dataset:common_voice_17_0", "base_model:openai/whisper-medium", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:13:00+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #dataset-common_voice_17_0 #base_model-openai/whisper-medium #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper da-nst ============== This model is a fine-tuned version of openai/whisper-medium on the common\_voice\_17\_0 dataset. It achieves the following results on the evaluation set: * Loss: 0.8780 * Wer: 28.6353 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 11000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.37.2 * Pytorch 2.2.0+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 11000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #whisper-event #generated_from_trainer #dataset-common_voice_17_0 #base_model-openai/whisper-medium #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 11000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
image-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
LeoNight/custom-resnet50d-v3
null
[ "transformers", "safetensors", "resnet-t", "image-classification", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
null
2024-04-20T08:14:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #resnet-t #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #resnet-t #image-classification #custom_code #arxiv-1910.09700 #autotrain_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0_ablation_declr_5iters5e7_iter_2 This model is a fine-tuned version of [ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1](https://huggingface.co/ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1) on the ZhangShenao/0.0_ablation_declr_5iters5e7_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
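The tags above indicate this iteration was trained with TRL's DPO trainer on the named preference dataset. A minimal sketch of what such a run could look like, assuming a `train` split, `beta=0.1`, bf16 precision, and a prompt/chosen/rejected column layout (none of which the card reports); the keyword layout follows older `trl` releases contemporary with Transformers 4.36:

```python
# Hedged sketch: a DPO run matching the reported hyperparameters, using TRL's
# DPOTrainer. Assumptions: dataset split="train" with prompt/chosen/rejected
# columns, beta=0.1, and bf16 precision; the card does not report these.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1"
model = AutoModelForCausalLM.from_pretrained(base)
ref_model = AutoModelForCausalLM.from_pretrained(base)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base)

train_dataset = load_dataset(
    "ZhangShenao/0.0_ablation_declr_5iters5e7_dataset", split="train"  # assumed split
)

args = TrainingArguments(
    output_dir="0.0_ablation_declr_5iters5e7_iter_2",  # assumed
    learning_rate=4e-07,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=42,
    bf16=True,  # assumed precision
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=args,
    beta=0.1,  # assumed; not reported in the card
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```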
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["ZhangShenao/0.0_ablation_declr_5iters5e7_dataset"], "base_model": "ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1", "model-index": [{"name": "0.0_ablation_declr_5iters5e7_iter_2", "results": []}]}
ZhangShenao/0.0_ablation_declr_5iters5e7_iter_2
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:ZhangShenao/0.0_ablation_declr_5iters5e7_dataset", "base_model:ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:15:08+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-ZhangShenao/0.0_ablation_declr_5iters5e7_dataset #base_model-ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_ablation_declr_5iters5e7_iter_2 This model is a fine-tuned version of ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1 on the ZhangShenao/0.0_ablation_declr_5iters5e7_dataset dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0_ablation_declr_5iters5e7_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1 on the ZhangShenao/0.0_ablation_declr_5iters5e7_dataset dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-ZhangShenao/0.0_ablation_declr_5iters5e7_dataset #base_model-ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_ablation_declr_5iters5e7_iter_2\n\nThis model is a fine-tuned version of ZhangShenao/0.0_ablation_declr_5iters5e7_iter_1 on the ZhangShenao/0.0_ablation_declr_5iters5e7_dataset dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 4e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 128\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Mlmns28rm6piPC2m4WZAk.jpeg)

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each one to their individual preferences.
{"license": "other", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Nitral-AI/OpiumOrca-L3-8B", "Nitral-AI/Smaurpo-L3-8B"]}
Nitral-AI/Poppy_Porpoise-L3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:Nitral-AI/OpiumOrca-L3-8B", "base_model:Nitral-AI/Smaurpo-L3-8B", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:18:00+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/OpiumOrca-L3-8B #base_model-Nitral-AI/Smaurpo-L3-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!image/jpeg

# "Poppy Porpoise" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each one to their individual preferences.
[ "# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-Nitral-AI/OpiumOrca-L3-8B #base_model-Nitral-AI/Smaurpo-L3-8B #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# \"Poppy Porpoise\" is a cutting-edge AI roleplay assistant based on the Llama 3 8B model, specializing in crafting unforgettable narrative experiences. With its advanced language capabilities, Poppy expertly immerses users in an interactive and engaging adventure, tailoring each adventure to their individual preferences." ]
image-to-text
transformers
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) using [LLaVA](https://llava-vl.github.io/) method and [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) is used as Image Encoder. **Training:** This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br> In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() model_path = 'toshi456/llava-jp-1.3b-v1.0-620k' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') image_size = model.get_model().vision_tower.image_processor.size["height"] if model.get_model().vision_tower.scales is not None: image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales) if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" 
    inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt
    conv.append_message(conv.roles[0], inp)
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

    input_ids = tokenizer_image_token(
        prompt, 
        tokenizer, 
        IMAGE_TOKEN_INDEX, 
        return_tensors='pt'
    ).unsqueeze(0)
    if device == "cuda":
        input_ids = input_ids.to(device)

    input_ids = input_ids[:, :-1]  # drop the trailing </sep> token appended to the input
    stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
    keywords = [stop_str]
    streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0)

    # predict
    with torch.inference_mode():
        model.generate(
            inputs=input_ids,
            images=image_tensor,
            do_sample=True,
            temperature=0.01,
            top_p=1.0,
            max_new_tokens=256,
            streamer=streamer,
            use_cache=True,
        )
    """猫の隣にはノートパソコンがあります。"""  # expected output: "There is a laptop next to the cat."
```

## Training dataset
**Stage1 Pretrain**
- [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA)

**Stage2 Fine-tuning**
- [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA)

## Acknowledgement
- [LLaVA](https://llava-vl.github.io/)
- [LLM-jp](https://llm-jp.nii.ac.jp/)

## License
cc-by-nc-4.0
{"language": ["ja"], "license": "cc-by-nc-4.0", "tags": ["vision", "image-captioning", "VQA"], "datasets": ["turing-motors/LLaVA-Pretrain-JA", "turing-motors/LLaVA-v1.5-Instruct-620K-JA"], "pipeline_tag": "image-to-text"}
toshi456/llava-jp-1.3b-v1.0-620k
null
[ "transformers", "safetensors", "llava-jp", "text-generation", "vision", "image-captioning", "VQA", "image-to-text", "ja", "dataset:turing-motors/LLaVA-Pretrain-JA", "dataset:turing-motors/LLaVA-v1.5-Instruct-620K-JA", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:21:04+00:00
[]
[ "ja" ]
TAGS #transformers #safetensors #llava-jp #text-generation #vision #image-captioning #VQA #image-to-text #ja #dataset-turing-motors/LLaVA-Pretrain-JA #dataset-turing-motors/LLaVA-v1.5-Instruct-620K-JA #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
# LLaVA-JP Model Card ## Model detail Model type: LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning llm-jp/llm-jp-1.3b-v1.0 using LLaVA method and google/siglip-so400m-patch14-384 is used as Image Encoder. Training: This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br> In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA. resources for more information: URL ## How to use the model 1. Download dependencies 2. Inference ## Training dataset Stage1 Pretrain - LLaVA-Pretrain-JA Stage2 Fine-tuning - LLaVA-v1.5-Instruct-620K-JA ## Acknowledgement - LLaVA - LLM-jp ## License cc-by-nc-4.0
[ "# LLaVA-JP Model Card", "## Model detail\n\nModel type:\n\nLLaVA-JP is a vision-language model that can converse about input images.<br>\nThis model was trained by fine-tuning llm-jp/llm-jp-1.3b-v1.0 using LLaVA method and google/siglip-so400m-patch14-384 is used as Image Encoder.\n\nTraining:\n\nThis model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br>\nIn the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA.\n\nresources for more information: URL", "## How to use the model\n1. Download dependencies\n\n\n2. Inference", "## Training dataset\nStage1 Pretrain\n- LLaVA-Pretrain-JA\n\nStage2 Fine-tuning\n- LLaVA-v1.5-Instruct-620K-JA", "## Acknowledgement\n- LLaVA\n- LLM-jp", "## License\ncc-by-nc-4.0" ]
[ "TAGS\n#transformers #safetensors #llava-jp #text-generation #vision #image-captioning #VQA #image-to-text #ja #dataset-turing-motors/LLaVA-Pretrain-JA #dataset-turing-motors/LLaVA-v1.5-Instruct-620K-JA #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# LLaVA-JP Model Card", "## Model detail\n\nModel type:\n\nLLaVA-JP is a vision-language model that can converse about input images.<br>\nThis model was trained by fine-tuning llm-jp/llm-jp-1.3b-v1.0 using LLaVA method and google/siglip-so400m-patch14-384 is used as Image Encoder.\n\nTraining:\n\nThis model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br>\nIn the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA.\n\nresources for more information: URL", "## How to use the model\n1. Download dependencies\n\n\n2. Inference", "## Training dataset\nStage1 Pretrain\n- LLaVA-Pretrain-JA\n\nStage2 Fine-tuning\n- LLaVA-v1.5-Instruct-620K-JA", "## Acknowledgement\n- LLaVA\n- LLM-jp", "## License\ncc-by-nc-4.0" ]
token-classification
gliner
# Model Card for GLiNER PII GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios. This model has been trained by fine-tuning `urchade/gliner_multi-v2.1` on the `urchade/synthetic-pii-ner-mistral-v1` dataset. This model is capable of recognizing various types of *personally identifiable information* (PII), including but not limited to these entity types: `person`, `organization`, `phone number`, `address`, `passport number`, `email`, `credit card number`, `social security number`, `health insurance id number`, `date of birth`, `mobile phone number`, `bank account number`, `medication`, `cpf`, `driver's license number`, `tax identification number`, `medical condition`, `identity card number`, `national id number`, `ip address`, `email address`, `iban`, `credit card expiration date`, `username`, `health insurance number`, `registration number`, `student id number`, `insurance number`, `flight number`, `landline phone number`, `blood type`, `cvv`, `reservation number`, `digital signature`, `social media handle`, `license plate number`, `cnpj`, `postal code`, `passport_number`, `serial number`, `vehicle registration number`, `credit card brand`, `fax number`, `visa number`, `insurance company`, `identity document number`, `transaction number`, `national health insurance number`, `cvc`, `birth certificate number`, `train ticket number`, `passport expiration date`, and `social_security_number`. ## Links * Paper: https://arxiv.org/abs/2311.08526 * Repository: https://github.com/urchade/GLiNER ```python from gliner import GLiNER model = GLiNER.from_pretrained("urchade/gliner_multi_pii-v1") text = """ Harilala Rasoanaivo, un homme d'affaires local d'Antananarivo, a enregistré une nouvelle société nommée "Rasoanaivo Enterprises" au Lot II M 92 Antohomadinika. Son numéro est le +261 32 22 345 67, et son adresse électronique est [email protected]. Il a fourni son numéro de sécu 501-02-1234 pour l'enregistrement. """ labels = ["work", "booking number", "personally identifiable information", "driver licence", "person", "book", "full address", "company", "actor", "character", "email", "passport number", "Social Security Number", "phone number"] entities = model.predict_entities(text, labels) for entity in entities: print(entity["text"], "=>", entity["label"]) ``` ``` Harilala Rasoanaivo => person Rasoanaivo Enterprises => company Lot II M 92 Antohomadinika => full address +261 32 22 345 67 => phone number [email protected] => email 501-02-1234 => Social Security Number ```
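One practical knob the snippet above leaves at its default: recent releases of the `gliner` package (an assumption about the installed version) let `predict_entities` take a confidence `threshold`, which can be raised to trade recall for precision on PII spans. Reusing `model`, `text`, and `labels` from the example above:

```python
# Hedged variation of the example above: raise the confidence threshold to
# suppress low-confidence PII spans (threshold support is assumed from recent
# gliner releases; the default is typically 0.5).
entities = model.predict_entities(text, labels, threshold=0.7)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```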
{"language": ["en", "fr", "de", "es", "pt", "it"], "license": "apache-2.0", "library_name": "gliner", "datasets": ["urchade/synthetic-pii-ner-mistral-v1"], "pipeline_tag": "token-classification"}
urchade/gliner_multi_pii-v1
null
[ "gliner", "pytorch", "token-classification", "en", "fr", "de", "es", "pt", "it", "dataset:urchade/synthetic-pii-ner-mistral-v1", "arxiv:2311.08526", "license:apache-2.0", "has_space", "region:us" ]
null
2024-04-20T08:21:07+00:00
[ "2311.08526" ]
[ "en", "fr", "de", "es", "pt", "it" ]
TAGS #gliner #pytorch #token-classification #en #fr #de #es #pt #it #dataset-urchade/synthetic-pii-ner-mistral-v1 #arxiv-2311.08526 #license-apache-2.0 #has_space #region-us
# Model Card for GLiNER PII GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios. This model has been trained by fine-tuning 'urchade/gliner_multi-v2.1' on the 'urchade/synthetic-pii-ner-mistral-v1' dataset. This model is capable of recognizing various types of *personally identifiable information* (PII), including but not limited to these entity types: 'person', 'organization', 'phone number', 'address', 'passport number', 'email', 'credit card number', 'social security number', 'health insurance id number', 'date of birth', 'mobile phone number', 'bank account number', 'medication', 'cpf', 'driver's license number', 'tax identification number', 'medical condition', 'identity card number', 'national id number', 'ip address', 'email address', 'iban', 'credit card expiration date', 'username', 'health insurance number', 'registration number', 'student id number', 'insurance number', 'flight number', 'landline phone number', 'blood type', 'cvv', 'reservation number', 'digital signature', 'social media handle', 'license plate number', 'cnpj', 'postal code', 'passport_number', 'serial number', 'vehicle registration number', 'credit card brand', 'fax number', 'visa number', 'insurance company', 'identity document number', 'transaction number', 'national health insurance number', 'cvc', 'birth certificate number', 'train ticket number', 'passport expiration date', and 'social_security_number'. ## Links * Paper: URL * Repository: URL
[ "# Model Card for GLiNER PII\n\nGLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.\n\nThis model has been trained by fine-tuning 'urchade/gliner_multi-v2.1' on the 'urchade/synthetic-pii-ner-mistral-v1' dataset.\n\nThis model is capable of recognizing various types of *personally identifiable information* (PII), including but not limited to these entity types: 'person', 'organization', 'phone number', 'address', 'passport number', 'email', 'credit card number', 'social security number', 'health insurance id number', 'date of birth', 'mobile phone number', 'bank account number', 'medication', 'cpf', 'driver's license number', 'tax identification number', 'medical condition', 'identity card number', 'national id number', 'ip address', 'email address', 'iban', 'credit card expiration date', 'username', 'health insurance number', 'registration number', 'student id number', 'insurance number', 'flight number', 'landline phone number', 'blood type', 'cvv', 'reservation number', 'digital signature', 'social media handle', 'license plate number', 'cnpj', 'postal code', 'passport_number', 'serial number', 'vehicle registration number', 'credit card brand', 'fax number', 'visa number', 'insurance company', 'identity document number', 'transaction number', 'national health insurance number', 'cvc', 'birth certificate number', 'train ticket number', 'passport expiration date', and 'social_security_number'.", "## Links\n\n* Paper: URL\n* Repository: URL" ]
[ "TAGS\n#gliner #pytorch #token-classification #en #fr #de #es #pt #it #dataset-urchade/synthetic-pii-ner-mistral-v1 #arxiv-2311.08526 #license-apache-2.0 #has_space #region-us \n", "# Model Card for GLiNER PII\n\nGLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibility, are costly and large for resource-constrained scenarios.\n\nThis model has been trained by fine-tuning 'urchade/gliner_multi-v2.1' on the 'urchade/synthetic-pii-ner-mistral-v1' dataset.\n\nThis model is capable of recognizing various types of *personally identifiable information* (PII), including but not limited to these entity types: 'person', 'organization', 'phone number', 'address', 'passport number', 'email', 'credit card number', 'social security number', 'health insurance id number', 'date of birth', 'mobile phone number', 'bank account number', 'medication', 'cpf', 'driver's license number', 'tax identification number', 'medical condition', 'identity card number', 'national id number', 'ip address', 'email address', 'iban', 'credit card expiration date', 'username', 'health insurance number', 'registration number', 'student id number', 'insurance number', 'flight number', 'landline phone number', 'blood type', 'cvv', 'reservation number', 'digital signature', 'social media handle', 'license plate number', 'cnpj', 'postal code', 'passport_number', 'serial number', 'vehicle registration number', 'credit card brand', 'fax number', 'visa number', 'insurance company', 'identity document number', 'transaction number', 'national health insurance number', 'cvc', 'birth certificate number', 'train ticket number', 'passport expiration date', and 'social_security_number'.", "## Links\n\n* Paper: URL\n* Repository: URL" ]
text-generation
transformers
I am now basing all future releases of the MFANN experiment on llama-3 as the base model; I may continue fine-tuning mistral-7b every other release.

This model uses Meta's llama-3 as its base, and benchmarks are pending.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435f27b2d0ed796668ffd8b/VlqyDezfgqoujwIdiNfYB.png)

Changed the model name to MFANNV0.6 due to a failed benchmark and the need to resubmit.

Edit: due to continued benchmark failures I am renaming the model back to MFANNver0.6. The 3b model is also failing benchmarks for some reason, despite the fact that both models run fine on my machine :(
{"license": "apache-2.0", "library_name": "transformers", "datasets": ["netcat420/MFANN"]}
netcat420/MFANNv0.6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "dataset:netcat420/MFANN", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:22:13+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #dataset-netcat420/MFANN #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
I am now basing all future releases of the MFANN experiment on llama-3 as the base model; I may continue fine-tuning mistral-7b every other release.

This model uses Meta's llama-3 as its base, and benchmarks are pending.

!image/png

Changed the model name to MFANNV0.6 due to a failed benchmark and the need to resubmit.

Edit: due to continued benchmark failures I am renaming the model back to MFANNver0.6. The 3b model is also failing benchmarks for some reason, despite the fact that both models run fine on my machine :(
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #dataset-netcat420/MFANN #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
null
null
What are Odiflex pills?

Odiflex tablets are an advanced hearing solution that uses cutting-edge technology to enhance the auditory experience of individuals with hearing loss. Unlike traditional hearing aids, Odiflex comes in the form of a small capsule, making it discreet and convenient for daily use. This innovative device harnesses the power of artificial intelligence and machine-learning algorithms to adapt to the user's unique hearing profile, providing personalized sound amplification tailored to their specific needs.

Official website: <a href="https://www.nutritionsee.com/Odifmir">www.Odiflex.com</a>

<p><a href="https://www.nutritionsee.com/Odifmir"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/04/Odiflex-Morocco.png" alt="enter image description here"> </a></p>

<a href="https://www.nutritionsee.com/Odifmir">Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a>

Official website: <a href="https://www.nutritionsee.com/Odifmir">www.Odiflex.com</a>
{}
OdiflexMorocco/OdiflexMorocco
null
[ "region:us" ]
null
2024-04-20T08:22:36+00:00
[]
[]
TAGS #region-us
What are Odiflex pills?

Odiflex tablets are an advanced hearing solution that uses cutting-edge technology to enhance the auditory experience of individuals with hearing loss. Unlike traditional hearing aids, Odiflex comes in the form of a small capsule, making it discreet and convenient for daily use. This innovative device harnesses the power of artificial intelligence and machine-learning algorithms to adapt to the user's unique hearing profile, providing personalized sound amplification tailored to their specific needs.

Official website: <a href="URL

<p><a href="URL <img src="URL alt="enter image description here"> </a></p>

<a href="URL>Buy now!! Click the link below for more information and get a 50% discount now... Hurry</a>

Official website: <a href="URL
[]
[ "TAGS\n#region-us \n" ]
null
null
# T3qMulti_verse_model-7B T3qMulti_verse_model-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [chihoonlee10/T3Q-Mistral-Orca-Math-DPO](https://huggingface.co/chihoonlee10/T3Q-Mistral-Orca-Math-DPO) * [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) ## 🧩 Configuration ```yaml slices: - sources: - model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO layer_range: [0, 32] - model: MTSAIR/multi_verse_model layer_range: [0, 32] merge_method: slerp base_model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/T3qMulti_verse_model-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "MTSAIR/multi_verse_model"]}
automerger/T3qMulti_verse_model-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "base_model:chihoonlee10/T3Q-Mistral-Orca-Math-DPO", "base_model:MTSAIR/multi_verse_model", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:22:36+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #base_model-chihoonlee10/T3Q-Mistral-Orca-Math-DPO #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #region-us
# T3qMulti_verse_model-7B T3qMulti_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration. * chihoonlee10/T3Q-Mistral-Orca-Math-DPO * MTSAIR/multi_verse_model ## Configuration ## Usage
[ "# T3qMulti_verse_model-7B\n\nT3qMulti_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* chihoonlee10/T3Q-Mistral-Orca-Math-DPO\n* MTSAIR/multi_verse_model", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #base_model-chihoonlee10/T3Q-Mistral-Orca-Math-DPO #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #region-us \n", "# T3qMulti_verse_model-7B\n\nT3qMulti_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* chihoonlee10/T3Q-Mistral-Orca-Math-DPO\n* MTSAIR/multi_verse_model", "## Configuration", "## Usage" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Vaishnav267/llama-3-8B-ft-alpaca
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:25:31+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
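The "How to Get Started with the Model" section above is empty, so a minimal inference sketch follows. It assumes a standard T5 seq2seq checkpoint (consistent with the `t5` and `text2text-generation` tags) fine-tuned to translate natural-language questions into SQL; the `translate to SQL:` prefix is a hypothetical prompt format, not documented by the card.

```python
# Hedged sketch: standard T5 seq2seq loading; the prompt prefix is hypothetical.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "HovhAbg/text2sql"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "translate to SQL: List the names of all customers from Paris."
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```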
{"library_name": "transformers", "tags": []}
HovhAbg/text2sql
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:25:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png"> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) These were two of the most interesting Llama 3 finetunes available so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw4.8
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:26:16+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct These were two of the most interesting Llama 3 finetunes available so far. The resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, Llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text-generation
null
# DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF This model was converted to GGUF format from [`AIGym/TinyLlama-1.1B-2.5T-chat`](https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF --model tinyllama-1.1b-2.5t-chat.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF --model tinyllama-1.1b-2.5t-chat.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-2.5t-chat.Q8_0.gguf -n 128 ```
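For programmatic use, the same GGUF file can also be loaded with the `llama-cpp-python` bindings instead of the CLI above. A minimal sketch, assuming the file has already been downloaded locally and `pip install llama-cpp-python` has been run; the prompt and context size mirror the CLI and server examples:

```python
# Hedged sketch: llama-cpp-python alternative to the llama-cli invocation above.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-2.5t-chat.Q8_0.gguf",  # file name from the CLI example
    n_ctx=2048,  # matches the -c 2048 used in the server example
)

out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```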
{"license": "apache-2.0", "tags": ["finetuned", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "model-index": [{"name": "TinyLlama-1.1B-2.5T-chat", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 34.47, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 59.71, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 26.45, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 38.8}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.01, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 1.14, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF
null
[ "gguf", "finetuned", "llama-cpp", "gguf-my-repo", "text-generation", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T08:28:40+00:00
[]
[]
TAGS #gguf #finetuned #llama-cpp #gguf-my-repo #text-generation #license-apache-2.0 #model-index #region-us
# DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF This model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF\nThis model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #finetuned #llama-cpp #gguf-my-repo #text-generation #license-apache-2.0 #model-index #region-us \n", "# DavidAU/TinyLlama-1.1B-2.5T-chat-Q8_0-GGUF\nThis model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
null
# DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF This model was converted to GGUF format from [`AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling`](https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF --model tinyllama-1.1b-2.5t-chat-and-function-calling.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF --model tinyllama-1.1b-2.5t-chat-and-function-calling.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-2.5t-chat-and-function-calling.Q8_0.gguf -n 128 ```
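Because this variant is advertised as a chat and function-calling model, a chat-style call may fit better than raw completion. A hedged `llama-cpp-python` sketch: it relies on whatever chat template is embedded in the GGUF file, which this card does not document, and the user message is illustrative only.

```python
# Hedged sketch: chat-style call; depends on the chat template embedded in the GGUF.
from llama_cpp import Llama

llm = Llama(
    model_path="tinyllama-1.1b-2.5t-chat-and-function-calling.Q8_0.gguf",
    n_ctx=2048,
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Which tools or functions can you call?"}],
    max_tokens=128,
)
print(resp["choices"][0]["message"]["content"])
```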
{"license": "apache-2.0", "tags": ["finetuned", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "model-index": [{"name": "TinyLlama-1.1B-2.5T-chat-and-function-calling", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 34.39, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 59.61, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 26.32, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 38.92}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 61.96, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 1.74, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF
null
[ "gguf", "finetuned", "llama-cpp", "gguf-my-repo", "text-generation", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T08:28:54+00:00
[]
[]
TAGS #gguf #finetuned #llama-cpp #gguf-my-repo #text-generation #license-apache-2.0 #model-index #region-us
# DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF This model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF\nThis model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #finetuned #llama-cpp #gguf-my-repo #text-generation #license-apache-2.0 #model-index #region-us \n", "# DavidAU/TinyLlama-1.1B-2.5T-chat-and-function-calling-Q8_0-GGUF\nThis model was converted to GGUF format from 'AIGym/TinyLlama-1.1B-2.5T-chat-and-function-calling' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF This model was converted to GGUF format from [`aipib/karasu-1.1B-slerp_reverse`](https://huggingface.co/aipib/karasu-1.1B-slerp_reverse) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/aipib/karasu-1.1B-slerp_reverse) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF --model karasu-1.1b-slerp_reverse.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF --model karasu-1.1b-slerp_reverse.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m karasu-1.1b-slerp_reverse.Q8_0.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector", "llama-cpp", "gguf-my-repo"], "base_model": ["lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector"]}
DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector", "llama-cpp", "gguf-my-repo", "base_model:lightblue/karasu-1.1B", "base_model:niryuu/Karasu-1.1b-chat-vector", "region:us" ]
null
2024-04-20T08:29:11+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #llama-cpp #gguf-my-repo #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #region-us
# DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF This model was converted to GGUF format from 'aipib/karasu-1.1B-slerp_reverse' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerp_reverse' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #llama-cpp #gguf-my-repo #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #region-us \n", "# DavidAU/karasu-1.1B-slerp_reverse-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerp_reverse' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF This model was converted to GGUF format from [`aipib/karasu-1.1B-slerpx2`](https://huggingface.co/aipib/karasu-1.1B-slerpx2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/aipib/karasu-1.1B-slerpx2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF --model karasu-1.1b-slerpx2.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF --model karasu-1.1b-slerpx2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m karasu-1.1b-slerpx2.Q8_0.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "aipib/karasu-1.1B-slerp_reverse", "llama-cpp", "gguf-my-repo"], "base_model": ["lightblue/karasu-1.1B", "aipib/karasu-1.1B-slerp_reverse"]}
DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "aipib/karasu-1.1B-slerp_reverse", "llama-cpp", "gguf-my-repo", "base_model:lightblue/karasu-1.1B", "base_model:aipib/karasu-1.1B-slerp_reverse", "region:us" ]
null
2024-04-20T08:30:41+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #aipib/karasu-1.1B-slerp_reverse #llama-cpp #gguf-my-repo #base_model-lightblue/karasu-1.1B #base_model-aipib/karasu-1.1B-slerp_reverse #region-us
# DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF This model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #aipib/karasu-1.1B-slerp_reverse #llama-cpp #gguf-my-repo #base_model-lightblue/karasu-1.1B #base_model-aipib/karasu-1.1B-slerp_reverse #region-us \n", "# DavidAU/karasu-1.1B-slerpx2-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF This model was converted to GGUF format from [`aipib/karasu-1.1B-slerpx7`](https://huggingface.co/aipib/karasu-1.1B-slerpx7) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/aipib/karasu-1.1B-slerpx7) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF --model karasu-1.1b-slerpx7.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF --model karasu-1.1b-slerpx7.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m karasu-1.1b-slerpx7.Q8_0.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "aipib/karasu-1.1B-slerpx6", "llama-cpp", "gguf-my-repo"], "base_model": ["aipib/karasu-1.1B-slerpx6", "aipib/karasu-1.1B-slerpx6"]}
DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "aipib/karasu-1.1B-slerpx6", "llama-cpp", "gguf-my-repo", "base_model:aipib/karasu-1.1B-slerpx6", "region:us" ]
null
2024-04-20T08:30:56+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #aipib/karasu-1.1B-slerpx6 #llama-cpp #gguf-my-repo #base_model-aipib/karasu-1.1B-slerpx6 #region-us
# DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF This model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx7' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #aipib/karasu-1.1B-slerpx6 #llama-cpp #gguf-my-repo #base_model-aipib/karasu-1.1B-slerpx6 #region-us \n", "# DavidAU/karasu-1.1B-slerpx7-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/karasu-1.1B-slerpx7' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF This model was converted to GGUF format from [`aipib/TinyLlama-1.1B-Instruct-3T_slerp`](https://huggingface.co/aipib/TinyLlama-1.1B-Instruct-3T_slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/aipib/TinyLlama-1.1B-Instruct-3T_slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF --model tinyllama-1.1b-instruct-3t_slerp.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF --model tinyllama-1.1b-instruct-3t_slerp.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-instruct-3t_slerp.Q8_0.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "gardner/TinyLlama-1.1B-Instruct-3T", "llama-cpp", "gguf-my-repo"], "base_model": ["gardner/TinyLlama-1.1B-Instruct-3T", "gardner/TinyLlama-1.1B-Instruct-3T"]}
DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "gardner/TinyLlama-1.1B-Instruct-3T", "llama-cpp", "gguf-my-repo", "base_model:gardner/TinyLlama-1.1B-Instruct-3T", "region:us" ]
null
2024-04-20T08:31:13+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #gardner/TinyLlama-1.1B-Instruct-3T #llama-cpp #gguf-my-repo #base_model-gardner/TinyLlama-1.1B-Instruct-3T #region-us
# DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF This model was converted to GGUF format from 'aipib/TinyLlama-1.1B-Instruct-3T_slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/TinyLlama-1.1B-Instruct-3T_slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #gardner/TinyLlama-1.1B-Instruct-3T #llama-cpp #gguf-my-repo #base_model-gardner/TinyLlama-1.1B-Instruct-3T #region-us \n", "# DavidAU/TinyLlama-1.1B-Instruct-3T_slerp-Q8_0-GGUF\nThis model was converted to GGUF format from 'aipib/TinyLlama-1.1B-Instruct-3T_slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF This model was converted to GGUF format from [`alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2`](https://huggingface.co/alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-reasoning-v2.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-reasoning-v2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-chat-v1.0-reasoning-v2.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["trl", "sft", "generated_from_trainer", "llama-cpp", "gguf-my-repo"], "datasets": ["generator"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "model-index": [{"name": "TinyLlama-1.1B-Chat-v1.0-reasoning-v2", "results": []}]}
DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF
null
[ "gguf", "trl", "sft", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "dataset:generator", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:31:50+00:00
[]
[]
TAGS #gguf #trl #sft #generated_from_trainer #llama-cpp #gguf-my-repo #dataset-generator #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF This model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #trl #sft #generated_from_trainer #llama-cpp #gguf-my-repo #dataset-generator #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-Q8_0-GGUF\nThis model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF This model was converted to GGUF format from [`alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo`](https://huggingface.co/alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-reasoning-v2-dpo.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-reasoning-v2-dpo.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-chat-v1.0-reasoning-v2-dpo.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["trl", "dpo", "generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2", "model-index": [{"name": "TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo", "results": []}]}
DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF
null
[ "gguf", "trl", "dpo", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:32:03+00:00
[]
[]
TAGS #gguf #trl #dpo #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2 #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF This model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF\nThis model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #trl #dpo #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2 #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo-Q8_0-GGUF\nThis model was converted to GGUF format from 'alexredna/TinyLlama-1.1B-Chat-v1.0-reasoning-v2-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF This model was converted to GGUF format from [`Aryanne/TinyllamaMix-1.1B`](https://huggingface.co/Aryanne/TinyllamaMix-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Aryanne/TinyllamaMix-1.1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF --model tinyllamamix-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF --model tinyllamamix-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllamamix-1.1b.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 0.79}}, "widget": [{"messages": [{"role": "user", "content": "How to gain more money?"}]}], "model-index": [{"name": "TinyllamaMix-1.1B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 31.48, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 48.39, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 25.05, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 33.45}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 58.48, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 1.06, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/TinyllamaMix-1.1B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T08:33:26+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #model-index #region-us
# DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF This model was converted to GGUF format from 'Aryanne/TinyllamaMix-1.1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Aryanne/TinyllamaMix-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #license-apache-2.0 #model-index #region-us \n", "# DavidAU/TinyllamaMix-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Aryanne/TinyllamaMix-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
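The auto-generated card above leaves its "How to Get Started with the Model" section empty. A minimal loading sketch, assuming this is a standard BART seq2seq checkpoint (the repo id comes from this record's `id` field; the example input and generation settings are illustrative, since the card does not state what task the model was fine-tuned for):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id taken from this record; everything else here is an assumption.
model_id = "automated-finetunning/bart_fulltraining_2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input -- the card does not say what the model was tuned on.
inputs = tokenizer("An example input sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```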
{"library_name": "transformers", "tags": []}
automated-finetunning/bart_fulltraining_2
null
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:34:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: 152334H/miqu-1-70b-sf layer_range: - 0 - 79 - model: 152334H/miqu-1-70b-sf layer_range: - 0 - 79 merge_method: slerp base_model: 152334H/miqu-1-70b-sf parameters: t: - filter: self_attn value: - 0 - 0.5 - 0.3 - 0.7 - 1 - filter: mlp value: - 1 - 0.5 - 0.7 - 0.3 - 0 - value: 0.5 dtype: bfloat16 ```
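The card shows the merge configuration but no usage snippet. A minimal loading sketch, assuming the merged checkpoint keeps the standard Llama causal-LM layout (`device_map="auto"` requires the `accelerate` package, and a 70B model needs substantial memory; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NotAiLOL/Knight-Miqu-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# bfloat16 matches the dtype declared in the merge config above.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```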
{"license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["152334H/miqu-1-70b-sf"]}
NotAiLOL/Knight-Miqu-70B
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "base_model:152334H/miqu-1-70b-sf", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:35:54+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-152334H/miqu-1-70b-sf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * 152334H/miqu-1-70b-sf ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* 152334H/miqu-1-70b-sf", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #conversational #base_model-152334H/miqu-1-70b-sf #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* 152334H/miqu-1-70b-sf", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper ORF Bundeslaender This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the ZIB2 Common Voice dataset. It achieves the following results on the evaluation set: - Loss: 0.7038 - Wer: 27.2689 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-------:|:----:|:---------------:|:-------:| | 0.4896 | 1.7153 | 1000 | 0.6019 | 26.6961 | | 0.3559 | 3.4305 | 2000 | 0.6038 | 26.6192 | | 0.259 | 5.1458 | 3000 | 0.6216 | 33.8450 | | 0.3272 | 6.8611 | 4000 | 0.6382 | 27.0730 | | 0.2413 | 8.5763 | 5000 | 0.6704 | 31.3207 | | 0.1691 | 10.2916 | 6000 | 0.6922 | 27.2466 | | 0.1702 | 12.0069 | 7000 | 0.7008 | 27.3284 | | 0.1726 | 13.7221 | 8000 | 0.7038 | 27.2689 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
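Since this repo holds a PEFT adapter rather than a full checkpoint, inference requires attaching it to the `openai/whisper-small` base model. A sketch assuming the standard `peft` loading API (the audio handling and German transcription settings are illustrative, not from the card):

```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "rmacek/ORF-small-de")  # adapter repo from this record
processor = WhisperProcessor.from_pretrained("openai/whisper-small")

# `audio` is assumed to be a 16 kHz mono waveform (e.g. loaded with librosa):
# features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
# ids = model.generate(features, language="de", task="transcribe")
# print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```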
{"language": ["de"], "license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "datasets": ["rmacek/ORF-whisper-small"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper ORF Bundeslaender", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "ZIB2 Common Voice", "type": "rmacek/ORF-whisper-small", "args": "config: de, split: test"}, "metrics": [{"type": "wer", "value": 27.268895060503866, "name": "Wer"}]}]}]}
rmacek/ORF-small-de
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "de", "dataset:rmacek/ORF-whisper-small", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T08:38:21+00:00
[]
[ "de" ]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #de #dataset-rmacek/ORF-whisper-small #base_model-openai/whisper-small #license-apache-2.0 #model-index #region-us
Whisper ORF Bundeslaender ========================= This model is a fine-tuned version of openai/whisper-small on the ZIB2 Common Voice dataset. It achieves the following results on the evaluation set: * Loss: 0.7038 * Wer: 27.2689 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * training\_steps: 8000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.1.dev0 * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #de #dataset-rmacek/ORF-whisper-small #base_model-openai/whisper-small #license-apache-2.0 #model-index #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* training\\_steps: 8000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
null
null
# DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF This model was converted to GGUF format from [`Ba2han/TinyOpenHermes-1.1B-4k`](https://huggingface.co/Ba2han/TinyOpenHermes-1.1B-4k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Ba2han/TinyOpenHermes-1.1B-4k) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF --model tinyopenhermes-1.1b-4k.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF --model tinyopenhermes-1.1b-4k.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyopenhermes-1.1b-4k.Q8_0.gguf -n 128 ```
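Beyond the CLI invocations above, a GGUF file like this can also be used from Python. A sketch assuming the `llama-cpp-python` bindings (its `Llama.from_pretrained` helper fetches the file from the Hub; the prompt and sampling settings are illustrative). The same pattern applies to the other GGUF conversions listed further down in this section.

```python
from llama_cpp import Llama  # pip install llama-cpp-python huggingface-hub

llm = Llama.from_pretrained(
    repo_id="DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF",
    filename="tinyopenhermes-1.1b-4k.Q8_0.gguf",
    n_ctx=2048,  # context length, matching the server example above
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```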
{"license": "cc-by-nc-4.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["teknium/openhermes"]}
DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "dataset:teknium/openhermes", "license:cc-by-nc-4.0", "region:us" ]
null
2024-04-20T08:39:21+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #dataset-teknium/openhermes #license-cc-by-nc-4.0 #region-us
# DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF This model was converted to GGUF format from 'Ba2han/TinyOpenHermes-1.1B-4k' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF\nThis model was converted to GGUF format from 'Ba2han/TinyOpenHermes-1.1B-4k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #dataset-teknium/openhermes #license-cc-by-nc-4.0 #region-us \n", "# DavidAU/TinyOpenHermes-1.1B-4k-Q8_0-GGUF\nThis model was converted to GGUF format from 'Ba2han/TinyOpenHermes-1.1B-4k' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
null
# DavidAU/TinyLlama-1.1bee-Q8_0-GGUF This model was converted to GGUF format from [`BEE-spoke-data/TinyLlama-1.1bee`](https://huggingface.co/BEE-spoke-data/TinyLlama-1.1bee) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BEE-spoke-data/TinyLlama-1.1bee) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1bee-Q8_0-GGUF --model tinyllama-1.1bee.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1bee-Q8_0-GGUF --model tinyllama-1.1bee.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1bee.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["bees", "beekeeping", "honey", "llama-cpp", "gguf-my-repo"], "datasets": ["BEE-spoke-data/bees-internal"], "metrics": ["accuracy"], "base_model": "PY007/TinyLlama-1.1B-intermediate-step-240k-503b", "inference": {"parameters": {"max_new_tokens": 64, "do_sample": true, "renormalize_logits": true, "repetition_penalty": 1.05, "no_repeat_ngram_size": 6, "temperature": 0.9, "top_p": 0.95, "epsilon_cutoff": 0.0008}}, "widget": [{"text": "In beekeeping, the term \"queen excluder\" refers to", "example_title": "Queen Excluder"}, {"text": "One way to encourage a honey bee colony to produce more honey is by", "example_title": "Increasing Honey Production"}, {"text": "The lifecycle of a worker bee consists of several stages, starting with", "example_title": "Lifecycle of a Worker Bee"}, {"text": "Varroa destructor is a type of mite that", "example_title": "Varroa Destructor"}, {"text": "In the world of beekeeping, the acronym PPE stands for", "example_title": "Beekeeping PPE"}, {"text": "The term \"robbing\" in beekeeping refers to the act of", "example_title": "Robbing in Beekeeping"}, {"text": "Question: What's the primary function of drone bees in a hive?\nAnswer:", "example_title": "Role of Drone Bees"}, {"text": "To harvest honey from a hive, beekeepers often use a device known as a", "example_title": "Honey Harvesting Device"}, {"text": "Problem: You have a hive that produces 60 pounds of honey per year. You decide to split the hive into two. Assuming each hive now produces at a 70% rate compared to before, how much honey will you get from both hives next year?\nTo calculate", "example_title": "Beekeeping Math Problem"}, {"text": "In beekeeping, \"swarming\" is the process where", "example_title": "Swarming"}], "pipeline_tag": "text-generation"}
DavidAU/TinyLlama-1.1bee-Q8_0-GGUF
null
[ "gguf", "bees", "beekeeping", "honey", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:BEE-spoke-data/bees-internal", "base_model:PY007/TinyLlama-1.1B-intermediate-step-240k-503b", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:40:20+00:00
[]
[ "en" ]
TAGS #gguf #bees #beekeeping #honey #llama-cpp #gguf-my-repo #text-generation #en #dataset-BEE-spoke-data/bees-internal #base_model-PY007/TinyLlama-1.1B-intermediate-step-240k-503b #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1bee-Q8_0-GGUF This model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-1.1bee' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1bee-Q8_0-GGUF\nThis model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-1.1bee' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #bees #beekeeping #honey #llama-cpp #gguf-my-repo #text-generation #en #dataset-BEE-spoke-data/bees-internal #base_model-PY007/TinyLlama-1.1B-intermediate-step-240k-503b #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1bee-Q8_0-GGUF\nThis model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-1.1bee' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-classification
transformers
# raccord/scenAIrio-classification ## Model Description The scenAIrio-classification-model is designed to classify parts of a movie script or scenario into one of three categories: NOTES, DIALOGUE, or SEQUENCE. It leverages a BERT transformer architecture to understand and classify text based on contextual nuances typical in scripts. ## Intended Use This model is intended for use in applications involving the processing and analysis of movie scripts or scenarios. It can help scriptwriters, editors, and directors to automatically categorize script segments, facilitating easier script breakdowns and edits. ## Training Data The model was trained on a dataset consisting of annotated movie scripts. Each part of the script was labeled as NOTES, DIALOGUE, or SEQUENCE. ## Training Procedure The model was trained using the following training arguments: - **Output Directory**: `./scenAIrio-modal` - **Training**: Enabled - **Evaluation**: Enabled - **Epochs**: 3 - **Training Batch Size per Device**: 16 - **Evaluation Batch Size per Device**: 32 - **Warmup Steps**: 100 - **Weight Decay**: 0.01 - **Logging**: Every 50 steps to `./multi-class-logs` - **Evaluation Strategy**: Every 50 steps - **Save Strategy**: Save checkpoints every 50 steps - **Best Model Loading**: At the end of training, the best performing model is loaded ## Model Architecture The model is based on a BERT transformer, specifically adapted for multi-class classification tasks. ## Evaluation Results | Phase | Loss | Accuracy | F1-Score | Precision | Recall | |--------|---------|----------|----------|-----------|--------| | Val | 0.21253 | 93.73% | 95.37% | 95.53% | 95.24% | | Train | 0.08378 | 97.94% | 98.47% | 98.56% | 98.39% | | Test | 0.26723 | 91.59% | 93.49% | 93.17% | 93.84% | ## Limitations - The model is specifically trained on French-language scripts and may not perform well with scripts in other languages. - Performance can vary significantly depending on the specific characteristics and formatting of the input scripts. ## Conclusion The scenAIrio-classification-model provides a robust tool for analyzing and categorizing parts of movie scripts. With high accuracy and precision, it is poised to be a valuable asset in the film and television industry.
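For completeness, a minimal inference sketch assuming the standard transformers text-classification pipeline (the French script fragment is made up, and the label names returned depend on the repo's config, so they may not read NOTES/DIALOGUE/SEQUENCE verbatim):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="martinvanaud/scenAIrio-classification-model",  # repo id from this record
)
# Hypothetical script fragment; the model expects French-language input.
print(classifier("JEAN : Tu ne peux pas partir maintenant, pas comme ça."))
```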
{"language": ["fr"], "library_name": "transformers", "tags": ["pytorch"], "datasets": ["martinvanaud/scenario-2043-05042024"], "pipeline_tag": "text-classification"}
martinvanaud/scenAIrio-classification-model
null
[ "transformers", "safetensors", "bert", "text-classification", "pytorch", "fr", "dataset:martinvanaud/scenario-2043-05042024", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:40:38+00:00
[]
[ "fr" ]
TAGS #transformers #safetensors #bert #text-classification #pytorch #fr #dataset-martinvanaud/scenario-2043-05042024 #autotrain_compatible #endpoints_compatible #region-us
raccord/scenAIrio-classification ================================ Model Description ----------------- The scenAIrio-classification-model is designed to classify parts of a movie script or scenario into one of three categories: NOTES, DIALOGUE, or SEQUENCE. It leverages a BERT transformer architecture to understand and classify text based on contextual nuances typical in scripts. Intended Use ------------ This model is intended for use in applications involving the processing and analysis of movie scripts or scenarios. It can help scriptwriters, editors, and directors to automatically categorize script segments, facilitating easier script breakdowns and edits. Training Data ------------- The model was trained on a dataset consisting of annotated movie scripts. Each part of the script was labeled as NOTES, DIALOGUE, or SEQUENCE. Training Procedure ------------------ The model was trained using the following training arguments: * Output Directory: './scenAIrio-modal' * Training: Enabled * Evaluation: Enabled * Epochs: 3 * Training Batch Size per Device: 16 * Evaluation Batch Size per Device: 32 * Warmup Steps: 100 * Weight Decay: 0.01 * Logging: Every 50 steps to './multi-class-logs' * Evaluation Strategy: Every 50 steps * Save Strategy: Save checkpoints every 50 steps * Best Model Loading: At the end of training, the best performing model is loaded Model Architecture ------------------ The model is based on a BERT transformer, specifically adapted for multi-class classification tasks. Evaluation Results ------------------ Limitations ----------- * The model is specifically trained on French-language scripts and may not perform well with scripts in other languages. * Performance can vary significantly depending on the specific characteristics and formatting of the input scripts. Conclusion ---------- The scenAIrio-classification-model provides a robust tool for analyzing and categorizing parts of movie scripts. With high accuracy and precision, it is poised to be a valuable asset in the film and television industry.
[]
[ "TAGS\n#transformers #safetensors #bert #text-classification #pytorch #fr #dataset-martinvanaud/scenario-2043-05042024 #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
null
# DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF This model was converted to GGUF format from [`BEE-spoke-data/TinyLlama-3T-1.1bee`](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/BEE-spoke-data/TinyLlama-3T-1.1bee) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF --model tinyllama-3t-1.1bee.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF --model tinyllama-3t-1.1bee.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-3t-1.1bee.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["bees", "bzz", "honey", "oprah winfrey", "llama-cpp", "gguf-my-repo"], "datasets": ["BEE-spoke-data/bees-internal"], "metrics": ["accuracy"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "inference": {"parameters": {"max_new_tokens": 64, "do_sample": true, "renormalize_logits": true, "repetition_penalty": 1.05, "no_repeat_ngram_size": 6, "temperature": 0.9, "top_p": 0.95, "epsilon_cutoff": 0.0008}}, "widget": [{"text": "In beekeeping, the term \"queen excluder\" refers to", "example_title": "Queen Excluder"}, {"text": "One way to encourage a honey bee colony to produce more honey is by", "example_title": "Increasing Honey Production"}, {"text": "The lifecycle of a worker bee consists of several stages, starting with", "example_title": "Lifecycle of a Worker Bee"}, {"text": "Varroa destructor is a type of mite that", "example_title": "Varroa Destructor"}, {"text": "In the world of beekeeping, the acronym PPE stands for", "example_title": "Beekeeping PPE"}, {"text": "The term \"robbing\" in beekeeping refers to the act of", "example_title": "Robbing in Beekeeping"}, {"text": "Question: What's the primary function of drone bees in a hive?\nAnswer:", "example_title": "Role of Drone Bees"}, {"text": "To harvest honey from a hive, beekeepers often use a device known as a", "example_title": "Honey Harvesting Device"}, {"text": "Problem: You have a hive that produces 60 pounds of honey per year. You decide to split the hive into two. Assuming each hive now produces at a 70% rate compared to before, how much honey will you get from both hives next year?\nTo calculate", "example_title": "Beekeeping Math Problem"}, {"text": "In beekeeping, \"swarming\" is the process where", "example_title": "Swarming"}], "pipeline_tag": "text-generation", "model-index": [{"name": "TinyLlama-3T-1.1bee", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 33.79, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 60.29, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 25.86, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 38.13}], "source": {"url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.22, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 0.45, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=BEE-spoke-data/TinyLlama-3T-1.1bee", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF
null
[ "gguf", "bees", "bzz", "honey", "oprah winfrey", "llama-cpp", "gguf-my-repo", "text-generation", "en", "dataset:BEE-spoke-data/bees-internal", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T", "license:apache-2.0", "model-index", "region:us" ]
null
2024-04-20T08:40:46+00:00
[]
[ "en" ]
TAGS #gguf #bees #bzz #honey #oprah winfrey #llama-cpp #gguf-my-repo #text-generation #en #dataset-BEE-spoke-data/bees-internal #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #model-index #region-us
# DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF This model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-3T-1.1bee' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF\nThis model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-3T-1.1bee' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #bees #bzz #honey #oprah winfrey #llama-cpp #gguf-my-repo #text-generation #en #dataset-BEE-spoke-data/bees-internal #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T #license-apache-2.0 #model-index #region-us \n", "# DavidAU/TinyLlama-3T-1.1bee-Q8_0-GGUF\nThis model was converted to GGUF format from 'BEE-spoke-data/TinyLlama-3T-1.1bee' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
keras
## Model description Fine-tuned bert-base-uncased model for AITA classification tasks The concept for this AITA classifier emerged thanks to a suggestion from my friend, [Venessa Tan](https://github.com/vennietweek), for our project in module CS5246 during the second semester of AY23/24 at the National University of Singapore. I had the opportunity to build and fine-tune this model from scratch. I am thankful for the contributions of my other group members [Ming Xuan](https://github.com/lmngxn) and [Hui Khang](https://github.com/hkkiat), who supported the project in valuable ways through data scraping and providing feedback. Find our main project [here](https://github.com/vennietweek/aita-analysis-tool) ## Intended uses & limitations Currently, it has limitations with shorter sequences. There are many edge cases that it doesn't perform well on. We hope this project inspires more developers to continue advancing this work, fostering greater ethical awareness in AI development. ## Training and evaluation data This model has been trained on [train.csv](https://huggingface.co/datasets/jeanong2/AITA-datasets) and evaluated on [test.csv](https://huggingface.co/datasets/jeanong2/AITA-datasets) ### Prediction Scores : - Precision: 0.8123 - Recall: 1.0000 - F1 Score: 0.8965 - Computed Accuracy: 0.9615 ## Example Run ```python from tensorflow.keras.models import load_model from huggingface_hub import from_pretrained_keras import tensorflow as tf from transformers import TFAutoModel, AutoTokenizer class BERTForClassification(tf.keras.Model): def __init__(self, bert_model, num_classes): super(BERTForClassification, self).__init__() self.bert = bert_model self.fc = tf.keras.layers.Dense(num_classes, activation='softmax') def call(self, inputs): x = self.bert(inputs)[1] return self.fc(x) bert_model = TFAutoModel.from_pretrained("bert-base-uncased") custom_objects = { 'BERTForClassification': BERTForClassification(bert_model, num_classes=2) } model = from_pretrained_keras("jeanong2/finetuned-bert-aita-classifier", custom_objects=custom_objects) tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") # Inference def inference_analysis(model, text): encoding = tokenizer(text, padding='max_length', truncation=True, max_length=512, return_tensors="tf") inputs = { 'input_ids': encoding['input_ids'], 'attention_mask': encoding['attention_mask'] } if 'token_type_ids' in encoding: inputs['token_type_ids'] = encoding['token_type_ids'] test_dataset = tf.data.Dataset.from_tensor_slices((inputs)) test_dataset = test_dataset.batch(1) predictions = model.predict(test_dataset) print("Probabilities for 0 and 1 :") print(predictions) text = '''AITA for making out with this dude's ex in front of him? | I play rugby with the guy in question (let's say, "Mark") but I don't usually hang out with him outside of matches and practice. He broke up with a woman (Jia) that I'm rather attracted to last week. For the sake of propriety, I had no real intention to make moves or anything. But last night I'm at a bar, and both Mark and Jia are there. I was at a table with some friends, and he was a couple tables over. Jia's with her friends as well but after a time comes over to my table and sits next to me, starts chatting me up. We flirt, and eventually she leans in and kisses me, and I reciprocate. I tend to think that PDA of that kind is a bit trashy so after a few seconds I get up with her and we go outside, but I can see that Mark has been watching the entire time. He makes a rude comment to both of us as we pass. 
Today at practice he picked a fight with me that would have come to blows if the other guys on the team hadn't held him back. He's steaming mad. I feel a little sorry for him, but at the moment I can't actually bring myself to feel bad about hooking up with Jia, or the fact that he was there for it. AITA here?''' inference_analysis(model, text) ``` ### Training hyperparameters The following hyperparameters were used during training: | Hyperparameters | Value | | :-- | :-- | | name | Adam | | weight_decay | None | | clipnorm | None | | global_clipnorm | None | | clipvalue | None | | use_ema | False | | ema_momentum | 0.99 | | ema_overwrite_frequency | None | | jit_compile | True | | is_legacy_optimizer | False | | learning_rate | 9.999999747378752e-06 | | beta_1 | 0.9 | | beta_2 | 0.999 | | epsilon | 1e-07 | | amsgrad | False | | training_precision | float32 |
{"language": ["en"], "license": "apache-2.0", "library_name": "keras", "datasets": ["jeanong2/AITA-datasets"]}
jeanong2/finetuned-bert-aita-classifier
null
[ "keras", "en", "dataset:jeanong2/AITA-datasets", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:41:19+00:00
[]
[ "en" ]
TAGS #keras #en #dataset-jeanong2/AITA-datasets #license-apache-2.0 #region-us
Model description ----------------- Fine-tuned bert-base-uncased model for AITA classification tasks The concept for this AITA classifier emerged thanks to a suggestion from my friend, Venessa Tan, for our project in module CS5246 during the second semester of AY23/24 at the National University of Singapore. I had the opportunity to build and fine-tune this model from scratch. I am thankful for the contributions of my other group members Ming Xuan and Hui Khang, who supported the project in valuable ways through data scraping and providing feedback. Find our main project here Intended uses & limitations --------------------------- Currently, it has limitations with shorter sequences. There are many edge cases that it doesn't perform well on. We hope this project inspires more developers to continue advancing this work, fostering greater ethical awareness in AI development. Training and evaluation data ---------------------------- This model has been trained on URL and evaluated on URL ### Prediction Scores : * Precision: 0.8123 * Recall: 1.0000 * F1 Score: 0.8965 * Computed Accuracy: 0.9615 Example Run ----------- ### Training hyperparameters The following hyperparameters were used during training:
[ "### Prediction Scores :\n\n\n* Precision: 0.8123\n* Recall: 1.0000\n* F1 Score: 0.8965\n* Computed Accuracy: 0.9615\n\n\nExample Run\n-----------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:" ]
[ "TAGS\n#keras #en #dataset-jeanong2/AITA-datasets #license-apache-2.0 #region-us \n", "### Prediction Scores :\n\n\n* Precision: 0.8123\n* Recall: 1.0000\n* F1 Score: 0.8965\n* Computed Accuracy: 0.9615\n\n\nExample Run\n-----------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:" ]
null
null
# DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8.1-1.1b`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.1-1.1b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF --model tinydolphin-2.8.1-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF --model tinydolphin-2.8.1-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinydolphin-2.8.1-1.1b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["cerebras/SlimPajama-627B", "bigcode/starcoderdata", "teknium/openhermes"]}
DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:teknium/openhermes", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:41:58+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us
# DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF This model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.1-1.1b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.1-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us \n", "# DavidAU/TinyDolphin-2.8.1-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.1-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF --model tinydolphin-2.8.2-1.1b-laser.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF --model tinydolphin-2.8.2-1.1b-laser.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinydolphin-2.8.2-1.1b-laser.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["cerebras/SlimPajama-627B", "bigcode/starcoderdata", "teknium/openhermes"]}
DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:teknium/openhermes", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:42:11+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us
# DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF This model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us \n", "# DavidAU/TinyDolphin-2.8.2-1.1b-laser-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8-1.1b`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF --model tinydolphin-2.8-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF --model tinydolphin-2.8-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinydolphin-2.8-1.1b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["cerebras/SlimPajama-627B", "bigcode/starcoderdata", "teknium/openhermes"]}
DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:teknium/openhermes", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:42:25+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us
# DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF This model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8-1.1b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-teknium/openhermes #license-apache-2.0 #region-us \n", "# DavidAU/TinyDolphin-2.8-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'cognitivecomputations/TinyDolphin-2.8-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF This model was converted to GGUF format from [`Danielbrdz/Barcenas-Tiny-1.1b-DPO`](https://huggingface.co/Danielbrdz/Barcenas-Tiny-1.1b-DPO) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Danielbrdz/Barcenas-Tiny-1.1b-DPO) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF --model barcenas-tiny-1.1b-dpo.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF --model barcenas-tiny-1.1b-dpo.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m barcenas-tiny-1.1b-dpo.Q8_0.gguf -n 128 ```
{"language": ["en", "es"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Intel/orca_dpo_pairs"]}
DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "es", "dataset:Intel/orca_dpo_pairs", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:42:39+00:00
[]
[ "en", "es" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #es #dataset-Intel/orca_dpo_pairs #license-apache-2.0 #region-us
# DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF This model was converted to GGUF format from 'Danielbrdz/Barcenas-Tiny-1.1b-DPO' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Danielbrdz/Barcenas-Tiny-1.1b-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #es #dataset-Intel/orca_dpo_pairs #license-apache-2.0 #region-us \n", "# DavidAU/Barcenas-Tiny-1.1b-DPO-Q8_0-GGUF\nThis model was converted to GGUF format from 'Danielbrdz/Barcenas-Tiny-1.1b-DPO' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF This model was converted to GGUF format from [`Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2`](https://huggingface.co/Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF --model tinyllama-1.1b-fft-test2.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF --model tinyllama-1.1b-fft-test2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-fft-test2.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["generated_from_trainer", "llama-cpp", "gguf-my-repo"], "base_model": "TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T", "model-index": [{"name": "out", "results": []}]}
DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF
null
[ "gguf", "generated_from_trainer", "llama-cpp", "gguf-my-repo", "base_model:TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:42:53+00:00
[]
[]
TAGS #gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF This model was converted to GGUF format from 'Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF\nThis model was converted to GGUF format from 'Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #generated_from_trainer #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-FFT-Test2-Q8_0-GGUF\nThis model was converted to GGUF format from 'Dans-DiscountModels/TinyLlama-1.1B-FFT-Test2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF This model was converted to GGUF format from [`davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo`](https://huggingface.co/davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-intel-dpo.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF --model tinyllama-1.1b-chat-v1.0-intel-dpo.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-chat-v1.0-intel-dpo.Q8_0.gguf -n 128 ```
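Because the underlying model is a chat-tuned TinyLlama, the OpenAI-style chat API exposed by llama-cpp-python is a natural fit. A hedged sketch follows; the message content is a placeholder, and the chat template applied is whatever the GGUF metadata carries.

```python
# Sketch: chat-style inference with llama-cpp-python (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF",
    filename="tinyllama-1.1b-chat-v1.0-intel-dpo.Q8_0.gguf",
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what DPO fine-tuning does."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```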
{"language": ["en"], "license": "apache-2.0", "tags": ["dpo", "llama-cpp", "gguf-my-repo"], "datasets": ["argilla/distilabel-intel-orca-dpo-pairs"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF
null
[ "gguf", "dpo", "llama-cpp", "gguf-my-repo", "en", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:43:05+00:00
[]
[ "en" ]
TAGS #gguf #dpo #llama-cpp #gguf-my-repo #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF This model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF\nThis model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #dpo #llama-cpp #gguf-my-repo #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-Chat-v1.0-intel-dpo-Q8_0-GGUF\nThis model was converted to GGUF format from 'davanstrien/TinyLlama-1.1B-Chat-v1.0-intel-dpo' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF This model was converted to GGUF format from [`Deathsquad10/TinyLlama-1.1B-Remix-V.2`](https://huggingface.co/Deathsquad10/TinyLlama-1.1B-Remix-V.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Deathsquad10/TinyLlama-1.1B-Remix-V.2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF --model tinyllama-1.1b-remix-v.2.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF --model tinyllama-1.1b-remix-v.2.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-remix-v.2.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["cerebras/SlimPajama-627B", "bigcode/starcoderdata", "HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized"]}
DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:43:24+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF This model was converted to GGUF format from 'Deathsquad10/TinyLlama-1.1B-Remix-V.2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF\nThis model was converted to GGUF format from 'Deathsquad10/TinyLlama-1.1B-Remix-V.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-Remix-V.2-Q8_0-GGUF\nThis model was converted to GGUF format from 'Deathsquad10/TinyLlama-1.1B-Remix-V.2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF This model was converted to GGUF format from [`emir12/tinyllama-medical-1.1b`](https://huggingface.co/emir12/tinyllama-medical-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/emir12/tinyllama-medical-1.1b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF --model tinyllama-medical-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF --model tinyllama-medical-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-medical-1.1b.Q8_0.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-20T08:43:40+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF This model was converted to GGUF format from 'emir12/tinyllama-medical-1.1b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'emir12/tinyllama-medical-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/tinyllama-medical-1.1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'emir12/tinyllama-medical-1.1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
question-answering
transformers
# DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF This model was converted to GGUF format from [`eren23/DistiLabelOrca-TinyLLama-1.1B`](https://huggingface.co/eren23/DistiLabelOrca-TinyLLama-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/eren23/DistiLabelOrca-TinyLLama-1.1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF --model distilabelorca-tinyllama-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF --model distilabelorca-tinyllama-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m distilabelorca-tinyllama-1.1b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["argilla/distilabel-intel-orca-dpo-pairs"], "pipeline_tag": "question-answering", "model-index": [{"name": "DistiLabelOrca-TinyLLama-1.1B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 36.18, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 61.15, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 25.09, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 38.05}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 60.85, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 1.67, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=eren23/DistiLabelOrca-TinyLLama-1.1B", "name": "Open LLM Leaderboard"}}]}]}
DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "question-answering", "en", "dataset:argilla/distilabel-intel-orca-dpo-pairs", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:43:56+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #question-answering #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #license-apache-2.0 #model-index #endpoints_compatible #region-us
# DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF This model was converted to GGUF format from 'eren23/DistiLabelOrca-TinyLLama-1.1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'eren23/DistiLabelOrca-TinyLLama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #question-answering #en #dataset-argilla/distilabel-intel-orca-dpo-pairs #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "# DavidAU/DistiLabelOrca-TinyLLama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'eren23/DistiLabelOrca-TinyLLama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # byt5_5k This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0520 - eval_runtime: 10.8781 - eval_samples_per_second: 919.282 - eval_steps_per_second: 1.195 - epoch: 31.0 - step: 217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 800 - eval_batch_size: 800 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.35.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
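The hyperparameter list above maps directly onto the Hugging Face TrainingArguments API. The sketch below is a reconstruction under stated assumptions: only the values listed in the card are taken from it; the output directory name is inferred from the model name, and the actual training script is not published.

```python
# Hedged reconstruction of the reported hyperparameters with transformers' Trainer API.
# Anything not listed in the card is left at its Trainer default.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="byt5_5k",            # assumed from the model name
    learning_rate=5e-5,
    per_device_train_batch_size=800,  # card reports train_batch_size: 800
    per_device_eval_batch_size=800,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    # optimizer: Adam with betas=(0.9, 0.999) and eps=1e-08 is the Trainer default
)
```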
{"tags": ["generated_from_trainer"], "model-index": [{"name": "byt5_5k", "results": []}]}
AlexWang99/byt5_5k
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:45:01+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# byt5_5k This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 0.0520 - eval_runtime: 10.8781 - eval_samples_per_second: 919.282 - eval_steps_per_second: 1.195 - epoch: 31.0 - step: 217 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 800 - eval_batch_size: 800 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Framework versions - Transformers 4.35.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# byt5_5k\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0520\n- eval_runtime: 10.8781\n- eval_samples_per_second: 919.282\n- eval_steps_per_second: 1.195\n- epoch: 31.0\n- step: 217", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 800\n- eval_batch_size: 800\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# byt5_5k\n\nThis model was trained from scratch on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.0520\n- eval_runtime: 10.8781\n- eval_samples_per_second: 919.282\n- eval_steps_per_second: 1.195\n- epoch: 31.0\n- step: 217", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 800\n- eval_batch_size: 800\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 50", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
null
# DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from [`h4rz3rk4s3/TinyNewsLlama-1.1B`](https://huggingface.co/h4rz3rk4s3/TinyNewsLlama-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/h4rz3rk4s3/TinyNewsLlama-1.1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF --model tinynewsllama-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF --model tinynewsllama-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinynewsllama-1.1b.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["TinyLlama", "QLoRA", "Politics", "News", "sft", "llama-cpp", "gguf-my-repo"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0", "pipeline_tag": "text-generation"}
DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF
null
[ "gguf", "TinyLlama", "QLoRA", "Politics", "News", "sft", "llama-cpp", "gguf-my-repo", "text-generation", "en", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:45:04+00:00
[]
[ "en" ]
TAGS #gguf #TinyLlama #QLoRA #Politics #News #sft #llama-cpp #gguf-my-repo #text-generation #en #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from 'h4rz3rk4s3/TinyNewsLlama-1.1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyNewsLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #TinyLlama #QLoRA #Politics #News #sft #llama-cpp #gguf-my-repo #text-generation #en #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# DavidAU/TinyNewsLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyNewsLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from [`h4rz3rk4s3/TinyParlaMintLlama-1.1B`](https://huggingface.co/h4rz3rk4s3/TinyParlaMintLlama-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/h4rz3rk4s3/TinyParlaMintLlama-1.1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF --model tinyparlamintllama-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF --model tinyparlamintllama-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyparlamintllama-1.1b.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["TinyLlama", "QLoRA", "Politics", "EU", "sft", "llama-cpp", "gguf-my-repo"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF
null
[ "gguf", "TinyLlama", "QLoRA", "Politics", "EU", "sft", "llama-cpp", "gguf-my-repo", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:45:17+00:00
[]
[]
TAGS #gguf #TinyLlama #QLoRA #Politics #EU #sft #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from 'h4rz3rk4s3/TinyParlaMintLlama-1.1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyParlaMintLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #TinyLlama #QLoRA #Politics #EU #sft #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# DavidAU/TinyParlaMintLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyParlaMintLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from [`h4rz3rk4s3/TinyPoliticaLlama-1.1B`](https://huggingface.co/h4rz3rk4s3/TinyPoliticaLlama-1.1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/h4rz3rk4s3/TinyPoliticaLlama-1.1B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF --model tinypoliticallama-1.1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF --model tinypoliticallama-1.1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinypoliticallama-1.1b.Q8_0.gguf -n 128 ```
{"license": "apache-2.0", "tags": ["TinyLlama", "QLoRA", "Politics", "EU", "News", "sft", "llama-cpp", "gguf-my-repo"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF
null
[ "gguf", "TinyLlama", "QLoRA", "Politics", "EU", "News", "sft", "llama-cpp", "gguf-my-repo", "base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:45:31+00:00
[]
[]
TAGS #gguf #TinyLlama #QLoRA #Politics #EU #News #sft #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us
# DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF This model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #TinyLlama #QLoRA #Politics #EU #News #sft #llama-cpp #gguf-my-repo #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #region-us \n", "# DavidAU/TinyPoliticaLlama-1.1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF This model was converted to GGUF format from [`h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp`](https://huggingface.co/h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF --model tinypoliticallama-1.1b-slerp.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF --model tinypoliticallama-1.1b-slerp.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinypoliticallama-1.1b-slerp.Q8_0.gguf -n 128 ```
{"tags": ["merge", "mergekit", "lazymergekit", "h4rz3rk4s3/TinyNewsLlama-1.1B", "h4rz3rk4s3/TinyParlaMintLlama-1.1B", "llama-cpp", "gguf-my-repo"], "base_model": ["h4rz3rk4s3/TinyNewsLlama-1.1B", "h4rz3rk4s3/TinyParlaMintLlama-1.1B"]}
DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF
null
[ "gguf", "merge", "mergekit", "lazymergekit", "h4rz3rk4s3/TinyNewsLlama-1.1B", "h4rz3rk4s3/TinyParlaMintLlama-1.1B", "llama-cpp", "gguf-my-repo", "base_model:h4rz3rk4s3/TinyNewsLlama-1.1B", "base_model:h4rz3rk4s3/TinyParlaMintLlama-1.1B", "region:us" ]
null
2024-04-20T08:46:18+00:00
[]
[]
TAGS #gguf #merge #mergekit #lazymergekit #h4rz3rk4s3/TinyNewsLlama-1.1B #h4rz3rk4s3/TinyParlaMintLlama-1.1B #llama-cpp #gguf-my-repo #base_model-h4rz3rk4s3/TinyNewsLlama-1.1B #base_model-h4rz3rk4s3/TinyParlaMintLlama-1.1B #region-us
# DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF This model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #h4rz3rk4s3/TinyNewsLlama-1.1B #h4rz3rk4s3/TinyParlaMintLlama-1.1B #llama-cpp #gguf-my-repo #base_model-h4rz3rk4s3/TinyNewsLlama-1.1B #base_model-h4rz3rk4s3/TinyParlaMintLlama-1.1B #region-us \n", "# DavidAU/TinyPoliticaLlama-1.1B-slerp-Q8_0-GGUF\nThis model was converted to GGUF format from 'h4rz3rk4s3/TinyPoliticaLlama-1.1B-slerp' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
null
# DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF This model was converted to GGUF format from [`inu-ai/alpaca-guanaco-japanese-gpt-1b`](https://huggingface.co/inu-ai/alpaca-guanaco-japanese-gpt-1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/inu-ai/alpaca-guanaco-japanese-gpt-1b) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF --model alpaca-guanaco-japanese-gpt-1b.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF --model alpaca-guanaco-japanese-gpt-1b.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m alpaca-guanaco-japanese-gpt-1b.Q8_0.gguf -n 128 ```
{"language": "ja", "license": "unknown", "tags": ["ja", "japanese", "gpt", "text-generation", "lm", "nlp", "conversational", "llama-cpp", "gguf-my-repo"], "datasets": ["JosephusCheung/GuanacoDataset", "yahma/alpaca-cleaned"], "widget": [{"text": "<s>\\n\u4ee5\u4e0b\u306f\u3001\u30bf\u30b9\u30af\u3092\u8aac\u660e\u3059\u308b\u6307\u793a\u3067\u3059\u3002\u8981\u6c42\u3092\u9069\u5207\u306b\u6e80\u305f\u3059\u5fdc\u7b54\u3092\u66f8\u304d\u306a\u3055\u3044\u3002\\n[SEP]\\n\u6307\u793a:\\n\u65e5\u672c\u3067\u4e00\u756a\u5e83\u3044\u6e56\u306f\uff1f\\n[SEP]\\n\u5fdc\u7b54:\\n"}]}
DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF
null
[ "gguf", "ja", "japanese", "gpt", "text-generation", "lm", "nlp", "conversational", "llama-cpp", "gguf-my-repo", "dataset:JosephusCheung/GuanacoDataset", "dataset:yahma/alpaca-cleaned", "license:unknown", "region:us" ]
null
2024-04-20T08:49:13+00:00
[]
[ "ja" ]
TAGS #gguf #ja #japanese #gpt #text-generation #lm #nlp #conversational #llama-cpp #gguf-my-repo #dataset-JosephusCheung/GuanacoDataset #dataset-yahma/alpaca-cleaned #license-unknown #region-us
# DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF This model was converted to GGUF format from 'inu-ai/alpaca-guanaco-japanese-gpt-1b' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'inu-ai/alpaca-guanaco-japanese-gpt-1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #ja #japanese #gpt #text-generation #lm #nlp #conversational #llama-cpp #gguf-my-repo #dataset-JosephusCheung/GuanacoDataset #dataset-yahma/alpaca-cleaned #license-unknown #region-us \n", "# DavidAU/alpaca-guanaco-japanese-gpt-1b-Q8_0-GGUF\nThis model was converted to GGUF format from 'inu-ai/alpaca-guanaco-japanese-gpt-1b' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF This model was converted to GGUF format from [`jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha`](https://huggingface.co/jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF --model tinyllama-1.1b-1.5t-openorca-alpha.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF --model tinyllama-1.1b-1.5t-openorca-alpha.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-1.5t-openorca-alpha.Q8_0.gguf -n 128 ```
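Once llama-server is running as above, it exposes an HTTP completion endpoint. The sketch below queries it from Python using only the standard library; the host and port (127.0.0.1:8080) are llama-server defaults and are an assumption if you started the server with different flags.

```python
# Sketch: query a running llama-server instance over its HTTP completion endpoint.
# Assumes the llama-server command from this card is already running locally.
import json
import urllib.request

payload = {
    "prompt": "The meaning to life and the universe is",
    "n_predict": 128,  # number of tokens to generate
}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```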
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Open-Orca/OpenOrca", "bigcode/starcoderdata", "cerebras/SlimPajama-627B"]}
DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:Open-Orca/OpenOrca", "dataset:bigcode/starcoderdata", "dataset:cerebras/SlimPajama-627B", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:49:32+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-Open-Orca/OpenOrca #dataset-bigcode/starcoderdata #dataset-cerebras/SlimPajama-627B #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF This model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-Open-Orca/OpenOrca #dataset-bigcode/starcoderdata #dataset-cerebras/SlimPajama-627B #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-1.5T-OpenOrca-Alpha-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1.5T-OpenOrca-Alpha' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
hi000000/insta_chai-llama3_100
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:49:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF This model was converted to GGUF format from [`jeff31415/TinyLlama-1.1B-1T-OpenOrca`](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/jeff31415/TinyLlama-1.1B-1T-OpenOrca) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF --model tinyllama-1.1b-1t-openorca.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF --model tinyllama-1.1b-1t-openorca.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-1t-openorca.Q8_0.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["Open-Orca/OpenOrca", "bigcode/starcoderdata", "cerebras/SlimPajama-627B"]}
DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:Open-Orca/OpenOrca", "dataset:bigcode/starcoderdata", "dataset:cerebras/SlimPajama-627B", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:49:53+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-Open-Orca/OpenOrca #dataset-bigcode/starcoderdata #dataset-cerebras/SlimPajama-627B #license-apache-2.0 #region-us
# DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF This model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1T-OpenOrca' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1T-OpenOrca' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-Open-Orca/OpenOrca #dataset-bigcode/starcoderdata #dataset-cerebras/SlimPajama-627B #license-apache-2.0 #region-us \n", "# DavidAU/TinyLlama-1.1B-1T-OpenOrca-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/TinyLlama-1.1B-1T-OpenOrca' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF
This model was converted to GGUF format from [`jeff31415/tinyllama-1.1b-dpo-full`](https://huggingface.co/jeff31415/tinyllama-1.1b-dpo-full) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jeff31415/tinyllama-1.1b-dpo-full) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF --model tinyllama-1.1b-dpo-full.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF --model tinyllama-1.1b-dpo-full.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-dpo-full.Q8_0.gguf -n 128
```
{"tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-20T08:50:10+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF This model was converted to GGUF format from 'jeff31415/tinyllama-1.1b-dpo-full' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/tinyllama-1.1b-dpo-full' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# DavidAU/tinyllama-1.1b-dpo-full-Q8_0-GGUF\nThis model was converted to GGUF format from 'jeff31415/tinyllama-1.1b-dpo-full' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
null
# DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF
This model was converted to GGUF format from [`Jiayi-Pan/Tiny-Vicuna-1B`](https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Jiayi-Pan/Tiny-Vicuna-1B) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF --model tiny-vicuna-1b.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF --model tiny-vicuna-1b.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tiny-vicuna-1b.Q8_0.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"]}
DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:50:28+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us
# DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF This model was converted to GGUF format from 'Jiayi-Pan/Tiny-Vicuna-1B' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Jiayi-Pan/Tiny-Vicuna-1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #license-apache-2.0 #region-us \n", "# DavidAU/Tiny-Vicuna-1B-Q8_0-GGUF\nThis model was converted to GGUF format from 'Jiayi-Pan/Tiny-Vicuna-1B' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png">

MoE'd up:
- [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b)
- [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

These were the two most interesting Llama 3 finetunes out at the time.

Resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "5-bit", "region:us" ]
null
2024-04-20T08:51:07+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us
<img src=URL

MoE'd up:

- dreamgen/opus-v1.2-llama-3-8b
- NousResearch/Meta-Llama-3-8B-Instruct

These were the two most interesting Llama 3 finetunes out at the time.

Resulting model seems OK. It's not on Miqu's level, anyway.

Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #5-bit #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
suthawadee/member_thestreet
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:52:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# DavidAU/falcon-7b-instruct-Q6_K-GGUF
This model was converted to GGUF format from [`tiiuae/falcon-7b-instruct`](https://huggingface.co/tiiuae/falcon-7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/tiiuae/falcon-7b-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/falcon-7b-instruct-Q6_K-GGUF --model falcon-7b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/falcon-7b-instruct-Q6_K-GGUF --model falcon-7b-instruct.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m falcon-7b-instruct.Q6_K.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["tiiuae/falcon-refinedweb"], "inference": true, "widget": [{"text": "Hey Falcon! Any recommendations for my holidays in Abu Dhabi?", "example_title": "Abu Dhabi Trip"}, {"text": "What's the Everett interpretation of quantum mechanics?", "example_title": "Q/A: Quantum & Answers"}, {"text": "Give me a list of the top 10 dive sites you would recommend around the world.", "example_title": "Diving Top 10"}, {"text": "Can you tell me more about deep-water soloing?", "example_title": "Extreme sports"}, {"text": "Can you write a short tweet about the Apache 2.0 release of our latest AI model, Falcon LLM?", "example_title": "Twitter Helper"}, {"text": "What are the responsabilities of a Chief Llama Officer?", "example_title": "Trendy Jobs"}]}
DavidAU/falcon-7b-instruct-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:tiiuae/falcon-refinedweb", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:52:07+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-tiiuae/falcon-refinedweb #license-apache-2.0 #region-us
# DavidAU/falcon-7b-instruct-Q6_K-GGUF This model was converted to GGUF format from 'tiiuae/falcon-7b-instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/falcon-7b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'tiiuae/falcon-7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-tiiuae/falcon-refinedweb #license-apache-2.0 #region-us \n", "# DavidAU/falcon-7b-instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'tiiuae/falcon-7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MPF-DialogLED-base-16384-samsum-3-epochs-finetuned This model is a fine-tuned version of [MingZhong/DialogLED-base-16384](https://huggingface.co/MingZhong/DialogLED-base-16384) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6140 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9274 | 0.76 | 175 | 1.8243 | | 1.7227 | 1.52 | 350 | 1.6736 | | 1.4865 | 2.28 | 525 | 1.6140 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
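The hyperparameters listed above map directly onto Hugging Face `Seq2SeqTrainingArguments`; the sketch below restates them in code. Only the numbers come from this card — the output path is a hypothetical placeholder, and the optimizer line relies on the library's Adam defaults matching the stated betas and epsilon.

```python
# Sketch: the training configuration described above, expressed in code.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="mpf-dialogled-finetune",  # placeholder path
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=16,  # 4 * 16 = total train batch size 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the library default.
)
```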
{"tags": ["generated_from_trainer"], "base_model": "MingZhong/DialogLED-base-16384", "model-index": [{"name": "MPF-DialogLED-base-16384-samsum-3-epochs-finetuned", "results": []}]}
StDestiny/MPF-DialogLED-base-16384-samsum-3-epochs-finetuned
null
[ "transformers", "tensorboard", "safetensors", "led", "text2text-generation", "generated_from_trainer", "base_model:MingZhong/DialogLED-base-16384", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:52:48+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us
MPF-DialogLED-base-16384-samsum-3-epochs-finetuned ================================================== This model is a fine-tuned version of MingZhong/DialogLED-base-16384 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.6140 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #led #text2text-generation #generated_from_trainer #base_model-MingZhong/DialogLED-base-16384 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# DavidAU/mpt-7b-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`mosaicml/mpt-7b-instruct`](https://huggingface.co/mosaicml/mpt-7b-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mosaicml/mpt-7b-instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/mpt-7b-instruct-Q8_0-GGUF --model mpt-7b-instruct.Q8_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/mpt-7b-instruct-Q8_0-GGUF --model mpt-7b-instruct.Q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mpt-7b-instruct.Q8_0.gguf -n 128
```
{"license": "apache-2.0", "tags": ["Composer", "MosaicML", "llm-foundry", "llama-cpp", "gguf-my-repo"], "datasets": ["mosaicml/dolly_hhrlhf"], "inference": false}
DavidAU/mpt-7b-instruct-Q8_0-GGUF
null
[ "gguf", "Composer", "MosaicML", "llm-foundry", "llama-cpp", "gguf-my-repo", "dataset:mosaicml/dolly_hhrlhf", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:53:23+00:00
[]
[]
TAGS #gguf #Composer #MosaicML #llm-foundry #llama-cpp #gguf-my-repo #dataset-mosaicml/dolly_hhrlhf #license-apache-2.0 #region-us
# DavidAU/mpt-7b-instruct-Q8_0-GGUF This model was converted to GGUF format from 'mosaicml/mpt-7b-instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/mpt-7b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'mosaicml/mpt-7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #Composer #MosaicML #llm-foundry #llama-cpp #gguf-my-repo #dataset-mosaicml/dolly_hhrlhf #license-apache-2.0 #region-us \n", "# DavidAU/mpt-7b-instruct-Q8_0-GGUF\nThis model was converted to GGUF format from 'mosaicml/mpt-7b-instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
null
# DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF
This model was converted to GGUF format from [`codellama/CodeLlama-7b-Instruct-hf`](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF --model codellama-7b-instruct-hf.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF --model codellama-7b-instruct-hf.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m codellama-7b-instruct-hf.Q6_K.gguf -n 128
```
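Since this is an instruct model, chat-style calls need the Llama-2 `[INST]` template. Below is a hedged Python sketch via `llama-cpp-python`; the `chat_format` choice and filename glob are assumptions, not part of the original card.

```python
# Sketch: chat-style use of the CodeLlama instruct GGUF.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF",
    filename="*Q6_K.gguf",  # assumed repo layout
    chat_format="llama-2",  # CodeLlama-Instruct uses the Llama-2 [INST] template
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(resp["choices"][0]["message"]["content"])
```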
{"language": ["code"], "license": "llama2", "tags": ["llama-2", "llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation"}
DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF
null
[ "gguf", "llama-2", "llama-cpp", "gguf-my-repo", "text-generation", "code", "license:llama2", "region:us" ]
null
2024-04-20T08:55:28+00:00
[]
[ "code" ]
TAGS #gguf #llama-2 #llama-cpp #gguf-my-repo #text-generation #code #license-llama2 #region-us
# DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF This model was converted to GGUF format from 'codellama/CodeLlama-7b-Instruct-hf' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF\nThis model was converted to GGUF format from 'codellama/CodeLlama-7b-Instruct-hf' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-2 #llama-cpp #gguf-my-repo #text-generation #code #license-llama2 #region-us \n", "# DavidAU/CodeLlama-7b-Instruct-hf-Q6_K-GGUF\nThis model was converted to GGUF format from 'codellama/CodeLlama-7b-Instruct-hf' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`togethercomputer/Llama-2-7B-32K-Instruct`](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/togethercomputer/Llama-2-7B-32K-Instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF --model llama-2-7b-32k-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF --model llama-2-7b-32k-instruct.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-2-7b-32k-instruct.Q6_K.gguf -n 128
```
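The base model targets a 32K context, so a Python caller will usually want to raise `n_ctx` above the llama.cpp default (the CLI example above uses only `-c 2048`). A hedged sketch with `llama-cpp-python`; the filename glob is an assumption about the repo layout.

```python
# Sketch: load the 32K-context GGUF with an enlarged context window.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF",
    filename="*Q6_K.gguf",
    n_ctx=32768,  # request the full 32K window instead of the default
)

print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```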
{"language": ["en"], "license": "llama2", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["togethercomputer/llama-instruct"]}
DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:togethercomputer/llama-instruct", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-20T08:56:24+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #en #dataset-togethercomputer/llama-instruct #license-llama2 #endpoints_compatible #region-us
# DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF This model was converted to GGUF format from 'togethercomputer/Llama-2-7B-32K-Instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'togethercomputer/Llama-2-7B-32K-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #en #dataset-togethercomputer/llama-instruct #license-llama2 #endpoints_compatible #region-us \n", "# DavidAU/Llama-2-7B-32K-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'togethercomputer/Llama-2-7B-32K-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
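The "How to Get Started" section above is still a placeholder. Based on the `base_model` recorded in this record's metadata, loading the adapter would plausibly look like the hedged sketch below; the prompt and generation settings are illustrative assumptions.

```python
# Hedged sketch: attach this PEFT adapter to its 4-bit base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/gemma-7b-bnb-4bit"                  # from the record's metadata
adapter_id = "PrahmodhRaj/Gemma-7B_Psychiatrist_Chat"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("I have been feeling anxious lately. What should I do?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```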
{"library_name": "peft", "base_model": "unsloth/gemma-7b-bnb-4bit"}
PrahmodhRaj/Gemma-7B_Psychiatrist_Chat
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/gemma-7b-bnb-4bit", "region:us" ]
null
2024-04-20T08:56:55+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-unsloth/gemma-7b-bnb-4bit #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/gemma-7b-bnb-4bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
null
# DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF
This model was converted to GGUF format from [`togethercomputer/RedPajama-INCITE-7B-Instruct`](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF --model redpajama-incite-7b-instruct.Q6_K.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF --model redpajama-incite-7b-instruct.Q6_K.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m redpajama-incite-7b-instruct.Q6_K.gguf -n 128
```
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["togethercomputer/RedPajama-Data-1T", "togethercomputer/RedPajama-Data-Instruct"], "widget": [{"text": "Label the sentences as either 'positive', 'negative', 'mixed', or 'neutral': \n\nSentence: I can say that there isn't anything I would change.\nLabel: positive\n\nSentence: I'm not sure about this.\nLabel: neutral\n\nSentence: I liked some parts but I didn't like other parts.\nLabel: mixed\n\nSentence: I think the background image could have been better.\nLabel: negative\n\nSentence: I really like it.\nLabel:", "example_title": "Sentiment Analysis"}, {"text": "Please answer the following question:\n\nQuestion: What is the capital of Canada?\nAnswer: Ottawa\n\nQuestion: What is the currency of Switzerland?\nAnswer: Swiss franc\n\nQuestion: In which country is Wisconsin located?\nAnswer:", "example_title": "Question Answering"}, {"text": "Given a news article, classify its topic.\nPossible labels: 1. World 2. Sports 3. Business 4. Sci/Tech\n\nArticle: A nearby star thought to harbor comets and asteroids now appears to be home to planets, too.\nLabel: Sci/Tech\n\nArticle: Soaring crude prices plus worries about the economy and the outlook for earnings are expected to hang over the stock market next week during the depth of the summer doldrums.\nLabel: Business\n\nArticle: Murtagh a stickler for success Northeastern field hockey coach Cheryl Murtagh doesn't want the glare of the spotlight that shines on her to detract from a team that has been the America East champion for the past three years and has been to the NCAA tournament 13 times.\nLabel::", "example_title": "Topic Classification"}, {"text": "Paraphrase the given sentence into a different sentence.\n\nInput: Can you recommend some upscale restaurants in New York?\nOutput: What upscale restaurants do you recommend in New York?\n\nInput: What are the famous places we should not miss in Paris?\nOutput: Recommend some of the best places to visit in Paris?\n\nInput: Could you recommend some hotels that have cheap price in Zurich?\nOutput:", "example_title": "Paraphrasing"}, {"text": "Given a review from Amazon's food products, the task is to generate a short summary of the given review in the input.\n\nInput: I have bought several of the Vitality canned dog food products and have found them all to be of good quality. The product looks more like a stew than a processed meat and it smells better. My Labrador is finicky and she appreciates this product better than most.\nOutput: Good Quality Dog Food\n\nInput: Product arrived labeled as Jumbo Salted Peanuts...the peanuts were actually small sized unsalted. Not sure if this was an error or if the vendor intended to represent the product as 'Jumbo'.\nOutput: Not as Advertised\n\nInput: My toddler loves this game to a point where he asks for it. That's a big thing for me. Secondly, no glitching unlike one of their competitors (PlayShifu). Any tech I don\u2019t have to reach out to support for help is a good tech for me. I even enjoy some of the games and activities in this. Overall, this is a product that shows that the developers took their time and made sure people would not be asking for refund. I\u2019ve become bias regarding this product and honestly I look forward to buying more of this company\u2019s stuff. 
Please keep up the great work.\nOutput:", "example_title": "Text Summarization"}, {"text": "Identify which sense of a word is meant in a given context.\n\nContext: The river overflowed the bank.\nWord: bank\nSense: river bank\n\nContext: A mouse takes much more room than a trackball.\nWord: mouse\nSense: computer mouse\n\nContext: The bank will not be accepting cash on Saturdays.\nWord: bank\nSense: commercial (finance) banks\n\nContext: Bill killed the project\nWord: kill\nSense:", "example_title": "Word Sense Disambiguation"}, {"text": "Given a pair of sentences, choose whether the two sentences agree (entailment)/disagree (contradiction) with each other.\nPossible labels: 1. entailment 2. contradiction\n\nSentence 1: The skier was on the edge of the ramp. Sentence 2: The skier was dressed in winter clothes.\nLabel: entailment\n\nSentence 1: The boy skated down the staircase railing. Sentence 2: The boy is a newbie skater.\nLabel: contradiction\n\nSentence 1: Two middle-aged people stand by a golf hole. Sentence 2: A couple riding in a golf cart.\nLabel:", "example_title": "Natural Language Inference"}], "inference": {"parameters": {"temperature": 0.7, "top_p": 0.7, "top_k": 50, "max_new_tokens": 128}}}
DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:togethercomputer/RedPajama-Data-1T", "dataset:togethercomputer/RedPajama-Data-Instruct", "license:apache-2.0", "region:us" ]
null
2024-04-20T08:57:29+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-togethercomputer/RedPajama-Data-1T #dataset-togethercomputer/RedPajama-Data-Instruct #license-apache-2.0 #region-us
# DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF This model was converted to GGUF format from 'togethercomputer/RedPajama-INCITE-7B-Instruct' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'togethercomputer/RedPajama-INCITE-7B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-togethercomputer/RedPajama-Data-1T #dataset-togethercomputer/RedPajama-Data-Instruct #license-apache-2.0 #region-us \n", "# DavidAU/RedPajama-INCITE-7B-Instruct-Q6_K-GGUF\nThis model was converted to GGUF format from 'togethercomputer/RedPajama-INCITE-7B-Instruct' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "unsloth/mistral-7b-instruct-v0.2-bnb-4bit"}
PrahmodhRaj/Mistral-7B_Psychiatrist_Chat
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit", "region:us" ]
null
2024-04-20T08:58:09+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-unsloth/mistral-7b-instruct-v0.2-bnb-4bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # financeLM_outputpath_stock_movement_prediction__5 This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9480 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.1739 | 1.0 | 1817 | 0.9646 | | 0.8999 | 2.0 | 3634 | 0.9382 | | 0.8036 | 3.0 | 5451 | 0.9450 | | 0.7465 | 4.0 | 7269 | 0.9450 | | 0.7163 | 5.0 | 9085 | 0.9480 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.5 - Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "openai-community/gpt2", "model-index": [{"name": "financeLM_outputpath_stock_movement_prediction__5", "results": []}]}
Supersaiyan1729/financeLM_outputpath_stock_movement_prediction__5
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:openai-community/gpt2", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T08:58:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
financeLM\_outputpath\_stock\_movement\_prediction\_\_5 ======================================================= This model is a fine-tuned version of openai-community/gpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.9480 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 4 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.03 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.35.0 * Pytorch 2.1.2+cu121 * Datasets 2.14.5 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.14.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-openai-community/gpt2 #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.35.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.5\n* Tokenizers 0.14.1" ]
null
transformers
# DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF This model was converted to GGUF format from [`INSAIT-Institute/BgGPT-7B-Instruct-v0.1`](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/INSAIT-Institute/BgGPT-7B-Instruct-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF --model bggpt-7b-instruct-v0.1.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF --model bggpt-7b-instruct-v0.1.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bggpt-7b-instruct-v0.1.Q6_K.gguf -n 128 ```
{"language": ["bg"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mistral", "instruct", "bggpt", "insait", "llama-cpp", "gguf-my-repo"], "base_model": "mistralai/Mistral-7B-v0.1"}
DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF
null
[ "transformers", "gguf", "mistral", "instruct", "bggpt", "insait", "llama-cpp", "gguf-my-repo", "bg", "base_model:mistralai/Mistral-7B-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:01:26+00:00
[]
[ "bg" ]
TAGS #transformers #gguf #mistral #instruct #bggpt #insait #llama-cpp #gguf-my-repo #bg #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us
# DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF This model was converted to GGUF format from 'INSAIT-Institute/BgGPT-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'INSAIT-Institute/BgGPT-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #mistral #instruct #bggpt #insait #llama-cpp #gguf-my-repo #bg #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #endpoints_compatible #region-us \n", "# DavidAU/BgGPT-7B-Instruct-v0.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'INSAIT-Institute/BgGPT-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF This model was converted to GGUF format from [`tokyotech-llm/Swallow-7b-instruct-hf`](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-hf) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF --model swallow-7b-instruct-hf.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF --model swallow-7b-instruct-hf.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m swallow-7b-instruct-hf.Q6_K.gguf -n 128 ```
{"language": ["en", "ja"], "license": "llama2", "library_name": "transformers", "tags": ["llama-cpp", "gguf-my-repo"], "pipeline_tag": "text-generation", "model_type": "llama"}
DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF
null
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "en", "ja", "license:llama2", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:02:25+00:00
[]
[ "en", "ja" ]
TAGS #transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #ja #license-llama2 #endpoints_compatible #region-us
# DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF This model was converted to GGUF format from 'tokyotech-llm/Swallow-7b-instruct-hf' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF\nThis model was converted to GGUF format from 'tokyotech-llm/Swallow-7b-instruct-hf' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #llama-cpp #gguf-my-repo #text-generation #en #ja #license-llama2 #endpoints_compatible #region-us \n", "# DavidAU/Swallow-7b-instruct-hf-Q6_K-GGUF\nThis model was converted to GGUF format from 'tokyotech-llm/Swallow-7b-instruct-hf' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
transformers
# DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF This model was converted to GGUF format from [`speakleash/Bielik-7B-Instruct-v0.1`](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/speakleash/Bielik-7B-Instruct-v0.1) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF --model bielik-7b-instruct-v0.1.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF --model bielik-7b-instruct-v0.1.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bielik-7b-instruct-v0.1.Q6_K.gguf -n 128 ```
{"language": ["pl"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["finetuned", "llama-cpp", "gguf-my-repo"], "inference": {"parameters": {"temperature": 0.6}}, "widget": [{"messages": [{"role": "user", "content": "Co przedstawia polskie god\u0142o?"}]}]}
DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF
null
[ "transformers", "gguf", "finetuned", "llama-cpp", "gguf-my-repo", "pl", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:03:59+00:00
[]
[ "pl" ]
TAGS #transformers #gguf #finetuned #llama-cpp #gguf-my-repo #pl #license-cc-by-nc-4.0 #endpoints_compatible #region-us
# DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF This model was converted to GGUF format from 'speakleash/Bielik-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'speakleash/Bielik-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #finetuned #llama-cpp #gguf-my-repo #pl #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "# DavidAU/Bielik-7B-Instruct-v0.1-Q6_K-GGUF\nThis model was converted to GGUF format from 'speakleash/Bielik-7B-Instruct-v0.1' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [Fizzarolli/llama-3-lust-8b-step-748](https://huggingface.co/Fizzarolli/llama-3-lust-8b-step-748) * [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: NousResearch/Meta-Llama-3-8B layer_range: [0, 24] - sources: - model: Fizzarolli/llama-3-lust-8b-step-748 layer_range: [8, 32] merge_method: passthrough dtype: bfloat16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Fizzarolli/llama-3-lust-8b-step-748", "NousResearch/Meta-Llama-3-8B"]}
mergekit-community/mergekit-passthrough-kijjzjp
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "base_model:Fizzarolli/llama-3-lust-8b-step-748", "base_model:NousResearch/Meta-Llama-3-8B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:07:38+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #base_model-Fizzarolli/llama-3-lust-8b-step-748 #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * Fizzarolli/llama-3-lust-8b-step-748 * NousResearch/Meta-Llama-3-8B ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Fizzarolli/llama-3-lust-8b-step-748\n* NousResearch/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #base_model-Fizzarolli/llama-3-lust-8b-step-748 #base_model-NousResearch/Meta-Llama-3-8B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Fizzarolli/llama-3-lust-8b-step-748\n* NousResearch/Meta-Llama-3-8B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat-dpo-042016
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:09:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
Niggendar/KenCanMix_v20beta
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-20T09:10:19+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MODEL_D This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following `bitsandbytes` quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-hf", "model-index": [{"name": "MODEL_D", "results": []}]}
LLMLover/MODEL_D
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-hf", "region:us" ]
null
2024-04-20T09:11:10+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us
# MODEL_D This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure The following 'bitsandbytes' quantization config was used during training: - quant_method: bitsandbytes - _load_in_8bit: False - _load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 - load_in_4bit: True - load_in_8bit: False ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.4.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
[ "# MODEL_D\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-NousResearch/Llama-2-7b-hf #region-us \n", "# MODEL_D\n\nThis model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- quant_method: bitsandbytes\n- _load_in_8bit: False\n- _load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n- load_in_4bit: True\n- load_in_8bit: False", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.4.0\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6265 | 0.54 | 500 | 1.4843 | ### Framework versions - Transformers 4.37.2 - Pytorch 2.2.0+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "google/pegasus-cnn_dailymail", "model-index": [{"name": "pegasus-samsum", "results": []}]}
Francois2511/pegasus-samsum
null
[ "transformers", "safetensors", "pegasus", "text2text-generation", "generated_from_trainer", "base_model:google/pegasus-cnn_dailymail", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:11:45+00:00
[]
[]
TAGS #transformers #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #autotrain_compatible #endpoints_compatible #region-us
pegasus-samsum ============== This model is a fine-tuned version of google/pegasus-cnn\_dailymail on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.37.2 * Pytorch 2.2.0+cu121 * Datasets 2.17.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #pegasus #text2text-generation #generated_from_trainer #base_model-google/pegasus-cnn_dailymail #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.37.2\n* Pytorch 2.2.0+cu121\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
0x0mom/st_21
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:12:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model

- **Developed by:** surajgorai
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## Usage
```python
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "surajgorai/llama_3_8b_text_to_sql_model", # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

prompt = """You are a powerful text-to-SQL model. Your job is to answer questions about a database. You are given a question and context regarding one or more tables.

You must output the SQL query that answers the question.

### Instruction:
{}

### Input:
{}

### Response:
{}"""
# alpaca_prompt = You MUST copy from above!

inputs = tokenizer(
[
    prompt.format(
        'Name the result/games for 54741', # instruction
        'CREATE TABLE table_21436373_11 (result_games VARCHAR, attendance VARCHAR)', # input
        "", # output - leave this blank for generation!
    )
], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)

#response:
#['You are a powerful text-to-SQL model. Your job is to answer questions about a database. You are given a question and context regarding one or more tables.
#\n\nYou must output the SQL query that answers the question.\n\n### Instruction:\nName the result/games for 54741\n\n### Input:\nCREATE TABLE table_21436373_11 (result_games VARCHAR, attendance VARCHAR)
#\n\n### Response:\nSELECT result_games FROM table_21436373_11 WHERE attendance = "54741"<|end_of_text|>']

from transformers import TextStreamer
text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128)

#response:
#You are a powerful text-to-SQL model. Your job is to answer questions about a database. You are given a question and context regarding one or more tables.
#You must output the SQL query that answers the question.

### Instruction:
#Name the result/games for 54741

### Input:
#CREATE TABLE table_21436373_11 (result_games VARCHAR, attendance VARCHAR)

### Response:
#SELECT result_games FROM table_21436373_11 WHERE attendance = "54741"<|end_of_text|>
```
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
surajgorai/llama_3_8b_text_to_sql_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:12:40+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: surajgorai - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/> ## Usage
[ "# Uploaded model\n\n- Developed by: surajgorai\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>", "## Usage" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: surajgorai\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>", "## Usage" ]
text-generation
transformers
<img src=https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) These were the two most interesting llama3 finetunes so far. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw5.5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:15:44+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src=URL MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct These were the two most interesting llama3 finetunes so far. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
text2text-generation
transformers
## Model Summary This is a generative model designed specifically for search query rewriting, employing a sequence-to-sequence architecture for generating reformulated queries. It leverages a Reinforcement Learning framework to further boost performance, integrating a policy gradient algorithm. The model is trained with reward functions aimed at diversifying the generated queries by paraphrasing keywords. It can be integrated with sparse retrieval methods, such as bm25-based retrieval, to enhance document recall in search. ### Intended use cases Query rewriting for search (web, e-commerce), Virtual assistants and chatbots, Information retrieval ### Model Description Training Procedure 1. The training process begins by initializing the sequence-to-sequence model with Google's [T5-base model ](https://huggingface.co/google-t5/t5-base). 2. Initially, the model undergoes supervised training using the [MS-MARCO query pairs dataset](https://github.com/Narabzad/msmarco-query-reformulation/tree/main/datasets/queries) 3. Subsequently, the model is fine-tuned using a reinforcement learning (RL) framework to enhance its ability to generate queries that are both diverse and relevant. 4. It uses a policy gradient approach to fine-tune the model. For a given input query, a set of trajectories (reformulated queries) are sampled from the model and reward is computed. Policy gradient algorithm is applied to update the model. 5. Rewards are heuristically computed to enhance the model's paraphrasing capability. However, these rewards can be substituted with other domain-specific or goal-specific reward functions as needed. Refer [here](https://github.com/PraveenSH/RL-Query-Reformulation) for more details. ### Model Sources - **Repository:** https://github.com/PraveenSH/RL-Query-Reformulation ### How to use For optimal utilization of this model, use sampling with repetition penalty to generate diverse samples. Below is the provided sample code. ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer MODEL_ID = "prhegde/t5-query-reformulation-RL" tokenizer = T5Tokenizer.from_pretrained(MODEL_ID) model = T5ForConditionalGeneration.from_pretrained(MODEL_ID) model.eval() input_sequence = "how to bake great cookie" input_ids = tokenizer(input_sequence, return_tensors="pt").input_ids print(f'Input: {input_sequence}') nsent = 4 with torch.no_grad(): for i in range(nsent): output = model.generate(input_ids, max_length=35, num_beams=1, do_sample=True, repetition_penalty=1.8) target_sequence = tokenizer.decode(output[0], skip_special_tokens=True) print(f'Target: {target_sequence}') ```
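The training procedure above samples reformulated queries ("trajectories") from the model and applies a policy-gradient update against a heuristic paraphrasing reward; the linked repository holds the actual implementation. As a rough, self-contained sketch of that REINFORCE-style step — where `paraphrase_reward`, the learning rate, and the single-sample update are illustrative assumptions, not the repository's code:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical heuristic reward: keep some source keywords, add new ones.
def paraphrase_reward(source: str, candidate: str) -> float:
    src, cand = set(source.lower().split()), set(candidate.lower().split())
    if not src or not cand:
        return 0.0
    overlap = len(src & cand) / len(src)   # relevance: shared keywords
    novelty = len(cand - src) / len(cand)  # diversity: paraphrased keywords
    return 0.5 * overlap + 0.5 * novelty

tokenizer = T5Tokenizer.from_pretrained("google-t5/t5-base")
model = T5ForConditionalGeneration.from_pretrained("google-t5/t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

query = "how to bake great cookie"
input_ids = tokenizer(query, return_tensors="pt").input_ids

# Sample one trajectory (a reformulated query) from the current policy.
sampled = model.generate(input_ids, do_sample=True, max_length=35)
reward = paraphrase_reward(query, tokenizer.decode(sampled[0], skip_special_tokens=True))

# Score the sampled trajectory under the model; generate() prepends the
# decoder start token, so drop it before using the ids as labels.
labels = sampled[:, 1:].clone()
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
log_prob = -model(input_ids=input_ids, labels=labels).loss  # mean token log-prob

# REINFORCE step: increase the likelihood of high-reward reformulations.
(-reward * log_prob).backward()
optimizer.step()
optimizer.zero_grad()
```

In practice a batch of trajectories and a baseline would be used to reduce gradient variance; the single-sample loop above is only meant to make the sample-score-update cycle concrete.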
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["code"], "datasets": ["ms_marco"], "pipeline_tag": "text2text-generation", "widget": [{"text": "how to bake perfect cookie", "pipeline_tag": "text2text-generation"}], "inference_config": {"generation_config": {"max_length": 35, "num_beams": 1, "do_sample": true, "repetition_penalty": 1.8}}}
prhegde/t5-query-reformulation-RL
null
[ "transformers", "safetensors", "t5", "text2text-generation", "code", "en", "dataset:ms_marco", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:16:23+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #t5 #text2text-generation #code #en #dataset-ms_marco #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
## Model Summary This is a generative model designed specifically for search query rewriting, employing a sequence-to-sequence architecture for generating reformulated queries. It leverages a Reinforcement Learning framework to further boost performance, integrating a policy gradient algorithm. The model is trained with reward functions aimed at diversifying the generated queries by paraphrasing keywords. It can be integrated with sparse retrieval methods, such as bm25-based retrieval, to enhance document recall in search. ### Intended use cases Query rewriting for search (web, e-commerce), Virtual assistants and chatbots, Information retrieval ### Model Description Training Procedure 1. The training process begins by initializing the sequence-to-sequence model with Google's T5-base model . 2. Initially, the model undergoes supervised training using the MS-MARCO query pairs dataset 3. Subsequently, the model is fine-tuned using a reinforcement learning (RL) framework to enhance its ability to generate queries that are both diverse and relevant. 4. It uses a policy gradient approach to fine-tune the model. For a given input query, a set of trajectories (reformulated queries) are sampled from the model and reward is computed. Policy gradient algorithm is applied to update the model. 5. Rewards are heuristically computed to enhance the model's paraphrasing capability. However, these rewards can be substituted with other domain-specific or goal-specific reward functions as needed. Refer here for more details. ### Model Sources - Repository: URL ### How to use For optimal utilization of this model, use sampling with repetition penalty to generate diverse samples. Below is the provided sample code.
[ "## Model Summary\nThis is a generative model designed specifically for search query rewriting, employing a sequence-to-sequence architecture for generating reformulated queries. It leverages a Reinforcement Learning framework to further boost performance, integrating a policy gradient algorithm. The model is trained with reward functions aimed at diversifying the generated queries by paraphrasing keywords. It can be integrated with sparse retrieval methods, such as bm25-based retrieval, to enhance document recall in search.", "### Intended use cases\nQuery rewriting for search (web, e-commerce), Virtual assistants and chatbots, Information retrieval", "### Model Description\n\nTraining Procedure\n\n1. The training process begins by initializing the sequence-to-sequence model with Google's T5-base model .\n2. Initially, the model undergoes supervised training using the MS-MARCO query pairs dataset\n3. Subsequently, the model is fine-tuned using a reinforcement learning (RL) framework to enhance its ability to generate queries that are both diverse and relevant.\n4. It uses a policy gradient approach to fine-tune the model. For a given input query, a set of trajectories (reformulated queries) are sampled from the model and reward is computed. Policy gradient algorithm is applied to update the model.\n5. Rewards are heuristically computed to enhance the model's paraphrasing capability. However, these rewards can be substituted with other domain-specific or goal-specific reward functions as needed.\n\nRefer here for more details.", "### Model Sources\n\n\n- Repository: URL", "### How to use\nFor optimal utilization of this model, use sampling with repetition penalty to generate diverse samples. Below is the provided sample code." ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #code #en #dataset-ms_marco #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Model Summary\nThis is a generative model designed specifically for search query rewriting, employing a sequence-to-sequence architecture for generating reformulated queries. It leverages a Reinforcement Learning framework to further boost performance, integrating a policy gradient algorithm. The model is trained with reward functions aimed at diversifying the generated queries by paraphrasing keywords. It can be integrated with sparse retrieval methods, such as bm25-based retrieval, to enhance document recall in search.", "### Intended use cases\nQuery rewriting for search (web, e-commerce), Virtual assistants and chatbots, Information retrieval", "### Model Description\n\nTraining Procedure\n\n1. The training process begins by initializing the sequence-to-sequence model with Google's T5-base model .\n2. Initially, the model undergoes supervised training using the MS-MARCO query pairs dataset\n3. Subsequently, the model is fine-tuned using a reinforcement learning (RL) framework to enhance its ability to generate queries that are both diverse and relevant.\n4. It uses a policy gradient approach to fine-tune the model. For a given input query, a set of trajectories (reformulated queries) are sampled from the model and reward is computed. Policy gradient algorithm is applied to update the model.\n5. Rewards are heuristically computed to enhance the model's paraphrasing capability. However, these rewards can be substituted with other domain-specific or goal-specific reward functions as needed.\n\nRefer here for more details.", "### Model Sources\n\n\n- Repository: URL", "### How to use\nFor optimal utilization of this model, use sampling with repetition penalty to generate diverse samples. Below is the provided sample code." ]
text-generation
transformers
# OpenAI ChatGPT-2

![examples](https://huggingface.co/anezatra/chat-gpt2/raw/main/img.jpg)

## Model description

Generative Pre-trained Transformer 2 (GPT-2), developed by OpenAI, represents the second iteration in their foundational series of GPT models. GPT-2 embarked on its journey with a substantial dataset comprising 8 million web pages. Initially unveiled in February 2019, it reached its pinnacle with the full release of the 1.5-billion-parameter model on November 5, 2019.

GPT-2 emerged as a direct evolution from its predecessor, GPT-1, boasting a tenfold augmentation in both parameter count and training dataset magnitude. Positioned as a versatile learner, its prowess across diverse tasks stemmed from its innate capacity to accurately prognosticate the subsequent item in a sequence. This predictive prowess endowed it with the capability to engage in text translation, answer inquiries derived from textual contexts, distill concise summaries from extensive passages, and produce text outputs rivalling human composition. Nonetheless, it occasionally exhibited tendencies towards repetitiveness or tangential incoherence, particularly when tasked with generating lengthy passages.

Architecturally akin to its antecedent GPT-1 and progeny GPT-3 and GPT-4, GPT-2 features a generative pre-trained transformer architecture, underpinned by a deep neural network framework, specifically a transformer model. Departing from antiquated recurrence- and convolution-based designs, this architecture capitalizes on attention mechanisms. These mechanisms afford the model the capability to selectively concentrate on segments of input text deemed most pertinent. This transformative architectural paradigm facilitates enhanced parallelization, markedly surpassing preceding benchmarks established by RNN/CNN/LSTM-based models.

## Training

The transformer architecture provides a capability that allows GPT models to be trained on larger datasets compared to previous NLP (natural language processing) models. The GPT-1 model demonstrated the validity of this approach; however, GPT-2 aimed to further investigate the emergent properties of networks trained on extremely large datasets. CommonCrawl, a large corpus previously used to train NLP systems, was considered due to its extensive size. However, further examination revealed that much of the content was unintelligible. Consequently, OpenAI developed a new dataset called WebText. Instead of indiscriminately scraping content from the World Wide Web, WebText collected content only from pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The dataset was then cleaned; HTML documents were parsed into plain text, duplicate pages were removed, and Wikipedia pages were excluded due to the risk of overfitting, as they were prevalent in many other datasets. Additionally, this model was retrained by Anezatra on the OpenWebText corpus. Using DistilGPT, the model was reduced in size to create a lighter and more efficient version. The DistilGPT technique maintains the model's learning capabilities while reducing the number of parameters, thus speeding up training and inference processes and utilizing resources more efficiently. 
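Before the usage example below, here is a minimal sketch of the causal scaled dot-product attention that the architecture description above refers to — single head, no learned biases, plain PyTorch. It is a simplification for illustration, not this checkpoint's actual layer code:

```python
import math
import torch

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (seq_len, d_model) token representations;
    # w_q, w_k, w_v: (d_model, d_head) projection matrices.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(k.shape[-1])             # (seq_len, seq_len)
    mask = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(mask, float("-inf"))      # hide future tokens
    weights = torch.softmax(scores, dim=-1)               # where to "concentrate"
    return weights @ v                                    # weighted mix of values

seq_len, d_model, d_head = 8, 16, 16
x = torch.randn(seq_len, d_model)
out = causal_self_attention(x, *(torch.randn(d_model, d_head) for _ in range(3)))
print(out.shape)  # torch.Size([8, 16])
```

Because every position's scores are computed with matrix products rather than a step-by-step recurrence, the whole sequence can be processed in parallel — the property the description credits for surpassing RNN/CNN/LSTM-based designs.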
## How to use ```python # pip install git+https://github.com/huggingface/transformers.git # pip install accelerate # pip install torch from transformers import pipeline text_generator = pipeline("text-generation", model="anezatra/chat-gpt2", tokenizer="anezatra/chat-gpt2") prompt = "question: About psychologists?\nanswer:" generated_text = text_generator(prompt, max_length=1000, num_return_sequences=1) print(generated_text[0]["generated_text"]) ``` ## Example Output ```question: About psychologists answer: We can list what I have to say about psychologists as follows: 1) There is no direct correlation between age and behavior that goes beyond a single issue or point. This can make the difference that if you have a good therapist in there to help you develop a functioning and functioning mental health system, chances of going through these issues are very low. 2) No one can make this question unanswerable. 3) This is not the case. 4) People are asked "Which psychiatrist was best for ADHD?" and "Which way did your patient get it?" What advice for them? What advice they give you about psychotherapy therapy? How do they give you therapy? Which therapy you are going to get? And what advice do they give you? 5) The answer is "Yes." In fact, people will ask more than just "who was best for ADHD," the answer is "who did the best for ADHD." People respond almost as likely as other professionals who are more likely. The question to be asked "Is that a good way to help you better?" "Is it a good way to help you improve mental health in a non-psychiatric setting?" And what advice do clinicians give you about psychotherapy therapy? 6) Some therapists are skeptical. And as many as one third of people will tell you, "I have to tell you whether there's a medical professional you can help with when you look in the mirror" about all of these questions. And it's important to note that all of these individuals answer "yes" as many times as possible. There is really no way to test the reliability of these questions with accurate information or even have a clear objective answer that will answer all of these questions. 7) Some therapists are in denial about their own mental health problems. One of the reasons I am so critical of professional psychotherapy is to identify them as people who are going through a variety of mental health issues with different mental health problems. These people are often struggling with addiction and are sometimes in denial about what they have done and the way they have done and what they do. The same cannot be said about mental illness. 8) There is something wrong with talking about the individual for years. 9) If you say, "It is my responsibility to tell you. Do I want it as much as I can?" You may sound off on some of them, but do you know what can be done? Here are some helpful things: 1. The answer is "Don't talk to other people. ``` **Authors** - **Developed by:** Anezatra - **Model type:** GPT2 - **Contacts:** https://github.com/anezatra
{"datasets": ["Skylion007/openwebtext"], "pipeline_tag": "text-generation"}
anezatra/chat-gpt2
null
[ "transformers", "safetensors", "gpt2", "text-generation", "dataset:Skylion007/openwebtext", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:16:50+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #dataset-Skylion007/openwebtext #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# OpenAI ChatGPT-2 !examples ## Model description Generative Pre-trained Transformer 2 (GPT-2), developed by OpenAI, represents the second iteration in their foundational series of GPT models. GPT-2 embarked on its journey with a substantial dataset comprising 8 million web pages. Initially unveiled in February 2019, it reached its pinnacle with the full release of the 1.5-billion-parameter model on November 5, 2019. GPT-2 emerged as a direct evolution from its predecessor, GPT-1, boasting a tenfold augmentation in both parameter count and training dataset magnitude. Positioned as a versatile learner, its prowess across diverse tasks stemmed from its innate capacity to accurately prognosticate the subsequent item in a sequence. This predictive prowess endowed it with the capability to engage in text translation, answer inquiries derived from textual contexts, distill concise summaries from extensive passages, and produce text outputs rivalling human composition. Nonetheless, it occasionally exhibited tendencies towards repetitiveness or tangential incoherence, particularly when tasked with generating lengthy passages. Architecturally akin to its antecedent GPT-1 and progeny GPT-3 and GPT-4, GPT-2 features a generative pre-trained transformer architecture, underpinned by a deep neural network framework, specifically a transformer model. Departing from antiquated recurrence- and convolution-based designs, this architecture capitalizes on attention mechanisms. These mechanisms afford the model the capability to selectively concentrate on segments of input text deemed most pertinent. This transformative architectural paradigm facilitates enhanced parallelization, markedly surpassing preceding benchmarks established by RNN/CNN/LSTM-based models. ## Training The transformer architecture provides a capability that allows GPT models to be trained on larger datasets compared to previous NLP (natural language processing) models. The GPT-1 model demonstrated the validity of this approach; however, GPT-2 aimed to further investigate the emergent properties of networks trained on extremely large datasets. CommonCrawl, a large corpus previously used to train NLP systems, was considered due to its extensive size. However, further examination revealed that much of the content was unintelligible. Consequently, OpenAI developed a new dataset called WebText. Instead of indiscriminately scraping content from the World Wide Web, WebText collected content only from pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The dataset was then cleaned; HTML documents were parsed into plain text, duplicate pages were removed, and Wikipedia pages were excluded due to the risk of overfitting, as they were prevalent in many other datasets. Additionally, this model was retrained using the OpenWebText corpus by Anezatra. Utilizing DistilGPT, the model was aimed at reducing its size to create a lighter and more efficient version. The DistilGPT technique maintains the model's learning capabilities while reducing the number of parameters, thus speeding up training and inference processes and utilizing resources more efficiently. ## How to use ## Example Output Authors - Developed by: Anezatra - Model type: GPT2 - Contacts: URL
[ "# OpenAI ChatGPT-2\n\n!examples", "## Model description\n\nGenerative Pre-trained Transformer 2 (GPT-2), developed by OpenAI, represents the second iteration in their foundational series of GPT models. GPT-2 embarked on its journey with a substantial dataset comprising 8 million web pages. Initially unveiled in February 2019, it reached its pinnacle with the full release of the 1.5-billion-parameter model on November 5, 2019.\n\nGPT-2 emerged as a direct evolution from its predecessor, GPT-1, boasting a tenfold augmentation in both parameter count and training dataset magnitude. Positioned as a versatile learner, its prowess across diverse tasks stemmed from its innate capacity to accurately prognosticate the subsequent item in a sequence. This predictive prowess endowed it with the capability to engage in text translation, answer inquiries derived from textual contexts, distill concise summaries from extensive passages, and produce text outputs rivalling human composition. Nonetheless, it occasionally exhibited tendencies towards repetitiveness or tangential incoherence, particularly when tasked with generating lengthy passages.\n\nArchitecturally akin to its antecedent GPT-1 and progeny GPT-3 and GPT-4, GPT-2 features a generative pre-trained transformer architecture, underpinned by a deep neural network framework, specifically a transformer model. Departing from antiquated recurrence- and convolution-based designs, this architecture capitalizes on attention mechanisms. These mechanisms afford the model the capability to selectively concentrate on segments of input text deemed most pertinent. This transformative architectural paradigm facilitates enhanced parallelization, markedly surpassing preceding benchmarks established by RNN/CNN/LSTM-based models.", "## Training\n\nThe transformer architecture provides a capability that allows GPT models to be trained on larger datasets compared to previous NLP (natural language processing) models. The GPT-1 model demonstrated the validity of this approach; however, GPT-2 aimed to further investigate the emergent properties of networks trained on extremely large datasets. CommonCrawl, a large corpus previously used to train NLP systems, was considered due to its extensive size. However, further examination revealed that much of the content was unintelligible. Consequently, OpenAI developed a new dataset called WebText. Instead of indiscriminately scraping content from the World Wide Web, WebText collected content only from pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The dataset was then cleaned; HTML documents were parsed into plain text, duplicate pages were removed, and Wikipedia pages were excluded due to the risk of overfitting, as they were prevalent in many other datasets. Additionally, this model was retrained using the OpenWebText corpus by Anezatra. Utilizing DistilGPT, the model was aimed at reducing its size to create a lighter and more efficient version. The DistilGPT technique maintains the model's learning capabilities while reducing the number of parameters, thus speeding up training and inference processes and utilizing resources more efficiently.", "## How to use", "## Example Output\n\n\n\nAuthors\n\n- Developed by: Anezatra\n- Model type: GPT2\n- Contacts: URL" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #dataset-Skylion007/openwebtext #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# OpenAI ChatGPT-2\n\n!examples", "## Model description\n\nGenerative Pre-trained Transformer 2 (GPT-2), developed by OpenAI, represents the second iteration in their foundational series of GPT models. GPT-2 embarked on its journey with a substantial dataset comprising 8 million web pages. Initially unveiled in February 2019, it reached its pinnacle with the full release of the 1.5-billion-parameter model on November 5, 2019.\n\nGPT-2 emerged as a direct evolution from its predecessor, GPT-1, boasting a tenfold augmentation in both parameter count and training dataset magnitude. Positioned as a versatile learner, its prowess across diverse tasks stemmed from its innate capacity to accurately prognosticate the subsequent item in a sequence. This predictive prowess endowed it with the capability to engage in text translation, answer inquiries derived from textual contexts, distill concise summaries from extensive passages, and produce text outputs rivalling human composition. Nonetheless, it occasionally exhibited tendencies towards repetitiveness or tangential incoherence, particularly when tasked with generating lengthy passages.\n\nArchitecturally akin to its antecedent GPT-1 and progeny GPT-3 and GPT-4, GPT-2 features a generative pre-trained transformer architecture, underpinned by a deep neural network framework, specifically a transformer model. Departing from antiquated recurrence- and convolution-based designs, this architecture capitalizes on attention mechanisms. These mechanisms afford the model the capability to selectively concentrate on segments of input text deemed most pertinent. This transformative architectural paradigm facilitates enhanced parallelization, markedly surpassing preceding benchmarks established by RNN/CNN/LSTM-based models.", "## Training\n\nThe transformer architecture provides a capability that allows GPT models to be trained on larger datasets compared to previous NLP (natural language processing) models. The GPT-1 model demonstrated the validity of this approach; however, GPT-2 aimed to further investigate the emergent properties of networks trained on extremely large datasets. CommonCrawl, a large corpus previously used to train NLP systems, was considered due to its extensive size. However, further examination revealed that much of the content was unintelligible. Consequently, OpenAI developed a new dataset called WebText. Instead of indiscriminately scraping content from the World Wide Web, WebText collected content only from pages linked to by Reddit posts that had received at least three upvotes prior to December 2017. The dataset was then cleaned; HTML documents were parsed into plain text, duplicate pages were removed, and Wikipedia pages were excluded due to the risk of overfitting, as they were prevalent in many other datasets. Additionally, this model was retrained using the OpenWebText corpus by Anezatra. Utilizing DistilGPT, the model was aimed at reducing its size to create a lighter and more efficient version. The DistilGPT technique maintains the model's learning capabilities while reducing the number of parameters, thus speeding up training and inference processes and utilizing resources more efficiently.", "## How to use", "## Example Output\n\n\n\nAuthors\n\n- Developed by: Anezatra\n- Model type: GPT2\n- Contacts: URL" ]
reinforcement-learning
stable-baselines3
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fishtoby -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga fishtoby -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga fishtoby ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
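The commands above drive everything through the RL Zoo CLI; if you prefer to use the downloaded checkpoint from Python, a minimal sketch with stable-baselines3 follows. The checkpoint path is an assumption about where `rl_zoo3.load_from_hub` (shown above) places files under `logs/`; the Atari wrapping and 4-frame stack mirror the hyperparameters listed above.

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Assumed path: adjust to wherever rl_zoo3.load_from_hub saved the zip.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the training-time preprocessing: AtariWrapper + frame_stack=4.
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)

obs = env.reset()
for _ in range(1_000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```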
{"library_name": "stable-baselines3", "tags": ["SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "DQN", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "SpaceInvadersNoFrameskip-v4", "type": "SpaceInvadersNoFrameskip-v4"}, "metrics": [{"type": "mean_reward", "value": "634.50 +/- 120.22", "name": "mean_reward", "verified": false}]}]}]}
fishtoby/dqn-SpaceInvadersNoFrameskip-v4
null
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-20T09:19:13+00:00
[]
[]
TAGS #stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# DQN Agent playing SpaceInvadersNoFrameskip-v4 This is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL Install the RL Zoo (with SB3 and SB3-Contrib): If you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do: ## Training (with the RL Zoo) ## Hyperparameters # Environment Arguments
[ "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
[ "TAGS\n#stable-baselines3 #SpaceInvadersNoFrameskip-v4 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# DQN Agent playing SpaceInvadersNoFrameskip-v4\nThis is a trained model of a DQN agent playing SpaceInvadersNoFrameskip-v4\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL\n\nInstall the RL Zoo (with SB3 and SB3-Contrib):\n\n\n\n\nIf you installed the RL Zoo3 via pip ('pip install rl_zoo3'), from anywhere you can do:", "## Training (with the RL Zoo)", "## Hyperparameters", "# Environment Arguments" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-car0007-20240420 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0203 - Accuracy: 0.9939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1477 | 1.0 | 127 | 0.0922 | 0.9729 | | 0.0696 | 2.0 | 254 | 0.0396 | 0.9889 | | 0.0525 | 2.99 | 381 | 0.0463 | 0.9833 | | 0.0366 | 4.0 | 509 | 0.0232 | 0.9941 | | 0.0343 | 4.99 | 635 | 0.0203 | 0.9939 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
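Since the card only documents training, here is a minimal inference sketch for this checkpoint; the image path is a placeholder, and the label names depend on the imagefolder dataset used for fine-tuning.

```python
from PIL import Image
from transformers import pipeline

# The model id comes from this card; "example_car.jpg" is a placeholder path.
classifier = pipeline(
    "image-classification",
    model="tsware/swin-tiny-patch4-window7-224-finetuned-car0007-20240420",
)
for pred in classifier(Image.open("example_car.jpg"), top_k=3):
    print(f"{pred['label']}: {pred['score']:.3f}")
```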
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "swin-tiny-patch4-window7-224-finetuned-car0007-20240420", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "train", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9938514510575505, "name": "Accuracy"}]}]}]}
tsware/swin-tiny-patch4-window7-224-finetuned-car0007-20240420
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:20:57+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
swin-tiny-patch4-window7-224-finetuned-car0007-20240420 ======================================================= This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0203 * Accuracy: 0.9939 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1330 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1535 | 0.8208 | | 0.1269 | 2.0 | 1050 | 0.1329 | 0.8494 | | 0.0783 | 3.0 | 1575 | 0.1330 | 0.8649 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.1
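The card does not show inference, so a minimal sketch follows. The checkpoint id comes from this card; the German example sentence is an assumption based on the `panx-de` name (the PAN-X German NER split).

```python
from transformers import pipeline

# aggregation_strategy="simple" merges word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="mshirae3/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
for entity in ner("Jeff Dean arbeitet bei Google in Kalifornien."):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```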
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["f1"], "base_model": "xlm-roberta-base", "model-index": [{"name": "xlm-roberta-base-finetuned-panx-de", "results": []}]}
mshirae3/xlm-roberta-base-finetuned-panx-de
null
[ "transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:24:29+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
xlm-roberta-base-finetuned-panx-de ================================== This model is a fine-tuned version of xlm-roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1330 * F1: 0.8649 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 24 * eval\_batch\_size: 24 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #token-classification #generated_from_trainer #base_model-xlm-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 24\n* eval\\_batch\\_size: 24\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.1" ]
text-generation
peft
## Parameters ``` batch_size: 1 data_parameters: - dataset_config: input_features: - NEWSROOM - ARTICLE_TITLE - ARTICLE_TEXT_ALL shuffle_input_features: false shuffle_trainable_features: false trainable_features: - SUMMARY trunc_feature: ARTICLE_TEXT_ALL dataset_directory: data/article_summaries early_stopping_patience_epochs: 5 grad_batch_size: 64 learning_rate: 0.0001 load_in_4bit: true load_in_8bit: false log_every_n_steps: 10 lora_alpha: 64 lora_dim: 64 lora_dropout: 0.1 lora_target_modules: - q_proj - v_proj - k_proj - o_proj max_epochs: 100 max_token_len: 1600 model_name: NorLLM-AI/NorMixtral-8x7B name: article2summary-NorLLM-AINorMixtral-8x7B-Melvin num_samples_per_dataset: 4 num_workers: 0 precision: 16-mixed strategy: deepspeed_stage_2 task_name: article2summary val_check_interval: 0.5 weight_decay: 0.01 ```
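The parameters above describe a 4-bit LoRA run over NorMixtral-8x7B with `[SUMMARY]` as the trainable target; a minimal loading-and-generation sketch follows. The adapter repo id is a placeholder (it is not stated in the parameters), loading in 4-bit requires `bitsandbytes`, and the prompt layout simply mirrors the input/trainable features listed above.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "NorLLM-AI/NorMixtral-8x7B"   # base model named in the parameters
adapter_id = "<this-adapter-repo-id>"   # placeholder: fill in the adapter repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_4bit=True,          # mirrors load_in_4bit: true above
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

# Prompt built from the input features, ending at the trainable [SUMMARY] tag.
prompt = "[NEWSROOM] vg [ARTICLE_TITLE] ... [ARTICLE_TEXT_ALL] ... [SUMMARY]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```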
{"library_name": "peft", "base_model": "NorLLM-AI/NorMixtral-8x7B", "pipeline_tag": "text-generation", "widget": [{"text": "[NEWSROOM] vg [ARTICLE_TITLE] Timon Haugan tok karrierens f\u00f8rste verdenscupseier: \u2013 Helt uvirkelig [ARTICLE_TEXT_ALL] F\u00f8lelsene tok overh\u00e5nd etter Norges f\u00f8rste seier p\u00e5 herresiden denne sesongen. T\u00e5rev\u00e5te lagkamerater stormet bort og omfavnet Timon Haugan etter at han forsvarte ledelsen med en enorm slal\u00e5momgang i \u00f8sterrikske Saalbach. \u2013 Jeg er s\u00e5 glad p\u00e5 Timons vegne. Det er veldig spesielt. Jeg driter i hva som skjedde med meg i dag. N\u00e5r man ser noen jobbe s\u00e5 hardt s\u00e5 lenge, og s\u00e5 f\u00e5r de det til ... Idrettsglede, rett og slett, sa en gr\u00e5tkvalt Atle Lie McGrath til NRK. 27 \u00e5r gamle Haugan slo \u00f8sterrikske Manuel Feller p\u00e5 hjemmebane med 40 hundredeler og tyske Linus Strasser med 44 hundredeler. Alexander Steen Olsen klarte heller ikke holde t\u00e5rene tilbake da han l\u00f8ftet Haugan opp p\u00e5 gullstol sammen med McGrath. \u2013 Det f\u00f8les helt sinnssykt. Jeg har ikke ord. Jeg har tenkt mye p\u00e5 n\u00e5r seieren skulle komme, sier Haugan til NRK. \u2013 Det var helt uvirkelig og utrolig deilig \u00e5 krysse m\u00e5l og se gr\u00f8nt. Det var s\u00e5 mye vekt bort fra skuldrene. Den lettelsen var helt enorm, sier han til VG. Haugan gikk ikke bare til topps for f\u00f8rste gang i karrieren. Han s\u00f8rget ogs\u00e5 for Norges f\u00f8rste alpinseier i verdenscupen denne sesongen. \u2013 Vi har f\u00e5tt en del sp\u00f8rsm\u00e5l om at vi ikke har hatt noen seirer p\u00e5 herresiden gjennom sesongen. Det har vi v\u00e6rt bortskjemte med tidligere. Jeg har uten tvil kjent litt p\u00e5 det og f\u00f8lte p\u00e5 litt ekstra ansvar i dag. Lucas Braathen vant slal\u00e5mcupen i fjor, men valgte \u00e5 gi seg f\u00f8r sesongen. Nylig ble det klart at han gj\u00f8r comeback for Brasil. Haugans triumf gj\u00f8r at Norge har 36 strake sesonger i verdenscupen med minst \u00e9n seier p\u00e5 herresiden. \u2013 Helt fantastisk. Vi har jobbet som et lag hele sesongen for \u00e5 klare \u00e5 ta seier, da blir det stort, sier McGrath, som selv kj\u00f8rte ut i finaleomgangen. Frustrasjonen herjet l\u00f8rdag: Haugan gikk offensivt ut fra start i f\u00f8rste omgang og hadde gr\u00f8nne tall. Deretter tapte han noen hundredeler midtveis i l\u00f8ypen f\u00f8r han avsluttet sterkt og kunne sette seg i lederstolen i \u00f8sterrikske Saalbach-Hinterglemm. \u2013 For en omgang av Timon Haugan, ropte NRK-kommentator Carl Andreas Wold. Haugan hadde fra f\u00f8r av to pallplasseringer i verdenscupen denne sesongen. Han ble nummer tre i storslal\u00e5mrennet i Aspen og nummer to i slal\u00e5men i Schladming. Henrik Kristoffersen l\u00e5 p\u00e5 en syvendeplass etter f\u00f8rsteomgangen, men kj\u00f8rte str\u00e5lende i finalen og tok ledelsen med fem konkurrenter igjen. Nordmannen fikk derimot ikke beholde ledelsen lenge, da nestemann Marc Rochat var 15 hundredeler bedre. Deretter tok Manuel Feller ledelsen p\u00e5 hjemmebane med 52 hundredeler ned til Rochat, f\u00f8r Haugan forsvarte ledelsen fra f\u00f8rste omgang med glimrende kj\u00f8ring. Linus Strasser falt \u00e9n plass etter f\u00f8rste omgang. Kristoffersen ble nummer syv. Alexander Steen Olsen var rasende etter l\u00f8rdagens storslal\u00e5m og fant ikke helt rytmen slal\u00e5mens f\u00f8rsteomgang. Han var 3,83 sekunder bak Haugan i m\u00e5l og l\u00e5 som nummer 21 f\u00f8r andre omgang. 
Med tidlig startnummer i finalen gikk det bedre for 22-\u00e5ringen som klatret p\u00e5 listen. Han ble til slutt nummer elleve. Dette er det siste slal\u00e5mrennet i verdenscupen denne vinteren. Neste helg avsluttes alpinsesongen i Saalbach med super-G og utfor. Tidligere i mars ble Timon Haugan nummer tre i storslal\u00e5mrennet i Aspen: [SUMMARY]", "output": {"text": " ['Timon Haugan tok Norges f\u00f8rste alpinseier i verdenscupen denne sesongen i slal\u00e5mfinalen i \u00f8sterrikske Saalbach.', 'Dette var Haugans f\u00f8rste seier i karrieren, som sikret 36 strake sesonger med minst \u00e9n norsk seier p\u00e5 herresiden.', 'Haugan slo \u00f8sterrikske Manuel Feller med 40 hundredeler og tyske Linus Strasser med 44 hundredeler.', 'Lagkameratene Atle Lie McGrath og Alexander Steen Olsen klarte ikke holde t\u00e5rene tilbake etter Haugans seier.'] "}}, {"text": "[NEWSROOM] vg [ARTICLE_TITLE] Natos nye forsvarsplaner: Slik p\u00e5virker det Norge [ARTICLE_TEXT_ALL] Natos nye forsvarsplaner vil kreve mer av Norge, if\u00f8lge statsminister Jonas Gahr St\u00f8re (Ap). \u2013 Neste ukes Nato-toppm\u00f8te i Vilnius blir det viktigste i v\u00e5r tid. Det blir et historisk toppm\u00f8te, sier St\u00f8re til VG. \u2013 Vi skal ta beslutninger om en fullstendig omorganisering av Nato, tilpasset den situasjonen vi st\u00e5r i n\u00e5, og vil st\u00e5 i lenge fremover. Det legges nye, regionale forsvarsplaner. Land som f\u00f8ler seg utsatt, vil f\u00e5 styrker utplassert. Og vi skal inkludere Finland og etter hvert Sverige i planene, legger han til. Mandag reiser han til Litauen sammen med utenriksminister Anniken Huitfeldt (Ap) og forsvarsminister Bj\u00f8rn Arild Gram (Sp). VG har tidligere omtalt innholdet i de nye forsvarsplanene. Det er fortsatt uavklart om Tyrkia vil slippe Sverige inn i alliansen n\u00e5. St\u00f8re sier imidlertid at det uansett bare er et sp\u00f8rsm\u00e5l om tid f\u00f8r det skjer. For ett \u00e5r siden var ogs\u00e5 St\u00f8re optimist p\u00e5 Sveriges vegne. Her kan du lese hva han sa foran det forrige toppm\u00f8tet i Madrid i fjor. Ser p\u00e5 milit\u00e6re behov St\u00f8re har tidligere sagt at Sverige og Finland i Nato \u00e5pner for et tettere nordisk forsvarssamarbeid. Men Norge m\u00e5 ogs\u00e5 komme Sverige og Finland til unnsetning p\u00e5 andre m\u00e5ter: \u2013 Veldig mye av v\u00e5re veier og samferdsel g\u00e5r mellom nord og s\u00f8r. N\u00e5r Sverige og Finland skal forsterkes via norske havner og flyplasser, m\u00e5 vi se hva som mangler. De milit\u00e6re behovene blir en del av den nasjonale transportplanen som vi skal sende til Stortinget neste \u00e5r, sier St\u00f8re. \u2013 Hva betyr det konkret? \u2013 Forsvaret m\u00e5 kartlegge sine kritiske behov. Men det kan v\u00e6re \u00e5 forsterke knutepunkter for jernbanen, forsterkning av veier og bruer som skal t\u00e5le frakt av stridsvogner, og tilf\u00f8rselsveier til enkelte flyplasser, sier han. Milit\u00e6r mobilitet Forsvarssjefene i Norden har blinket ut fire n\u00f8kkelhavner som kan ta imot allierte forsterkninger fra USA og Canada, og sende dem videre: Ofotfjorden, Trondheimsfjorden, G\u00f6teborg-regionen og Esbjerg havn i Danmark. Finland er allerede i gang med \u00e5 forsterke jernbanebruene over Torne \u00e4lv, grenseelven mellom Sverige og Finland i nord. Nato kaller det for milit\u00e6r mobilitet: Tungt milit\u00e6rt materiell skal raskt kunne flyttes over store avstander og over nasjonale grenser. 
M\u00e5 oppgradere planer St\u00f8re sier at det ligger s\u00e5rbarheter ogs\u00e5 p\u00e5 andre omr\u00e5der; Alarmen gikk i Nato da gassr\u00f8rledningene i \u00d8stersj\u00f8en ble sprengt i fjor sommer. Da mobiliserte en rekke land styrker og skip for \u00e5 passe bedre p\u00e5 kabler under vann i Nordsj\u00f8en, blant annet. Norge og Tyskland ba Nato opprette en permanent stab for \u00e5 beskytte infrastruktur p\u00e5 havbunnen. Det skal ogs\u00e5 besluttes formelt i Vilnius. \u2013 Vi har brukt det siste \u00e5ret til \u00e5 g\u00e5 gjennom v\u00e5re kriseplaner p\u00e5 omr\u00e5der som klima, v\u00e6r og sammensatte trusler. Og vi ser at det trengs en oppgradering av planene p\u00e5 nesten alle omr\u00e5der, sier St\u00f8re. Krig var ikke utelukket \u2013 Mye av tenkningen om forsvar i Nato n\u00e5, minner om ideen bak det norske n\u00e6romr\u00e5de-initiativet fra 2008, sier St\u00f8re. Da var han utenriksminister og Jens Stoltenberg var statsminister. \u2013 Vi argumenterte for at Nato m\u00e5tte komme hjem igjen. Da hadde soldatene bedre kunnskap om fjellene i Afghanistan enn om kysten av Atlanterhavet, for \u00e5 sette det p\u00e5 spissen. Vi mente at krig i Europa ikke kunne utelukkes og at vi m\u00e5tte v\u00e6re forberedt, sier han. [SUMMARY]", "output": {"text": " ['Nye forsvarsplaner for Nato vil kreve mer enn bare forsvarsbudsjett fra Norge.', 'If\u00f8lge St\u00f8re er Nato-toppm\u00f8tet i Vilnius neste uke, det viktigste i v\u00e5r tid. Der kommer beslutninger om omorganisering av Natos forsvar.', 'St\u00f8re vil reise til Litauen sammen med utenriksminister Anniken Huitfeldt og forsvarsminister Bj\u00f8rn Arild Gram.', 'Nato-konseptet med milit\u00e6r mobilitet inneb\u00e6rer rask flytting av tungt milit\u00e6rt materiell over store avstander og over nasjonale grenser.', 'If\u00f8lge St\u00f8re krever komplekse trusler, som klima- og v\u00e6rrelaterte utfordringer, oppgradering av kriseplaner.'] "}}]}
sch-ai/article2summary-NorLLM-AINorMixtral-8x7B-Melvin
null
[ "peft", "tensorboard", "safetensors", "text-generation", "base_model:NorLLM-AI/NorMixtral-8x7B", "region:us" ]
null
2024-04-20T09:24:31+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #text-generation #base_model-NorLLM-AI/NorMixtral-8x7B #region-us
## Parameters
[ "## Parameters" ]
[ "TAGS\n#peft #tensorboard #safetensors #text-generation #base_model-NorLLM-AI/NorMixtral-8x7B #region-us \n", "## Parameters" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7872 - Accuracy: 0.9206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 318 | 3.2931 | 0.7255 | | 3.8009 | 2.0 | 636 | 1.8849 | 0.8526 | | 3.8009 | 3.0 | 954 | 1.1702 | 0.8897 | | 1.7128 | 4.0 | 1272 | 0.8717 | 0.9145 | | 0.9206 | 5.0 | 1590 | 0.7872 | 0.9206 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
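The card omits a usage snippet; assuming an intent-classification label set in the CLINC style (suggested by the model name, not stated in the card), inference could be sketched as:

```python
# Illustrative only: the exact label set depends on the unspecified training data.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vantaa32/distilbert-base-uncased-finetuned-clinc",
)

print(classifier("Please transfer one hundred dollars to my savings account."))
```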
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": []}]}
vantaa32/distilbert-base-uncased-finetuned-clinc
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:24:48+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-uncased-finetuned-clinc ======================================= This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.7872 * Accuracy: 0.9206 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 48 * eval\_batch\_size: 48 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 48\n* eval\\_batch\\_size: 48\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
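For reference, the same quantization settings can be expressed with the current `transformers` `BitsAndBytesConfig`; this is an equivalent sketch, not code from the training run:

```python
# Sketch mapping the recorded bitsandbytes values onto BitsAndBytesConfig.
# The llm_int8_* fields above match the library defaults, so they are not set here.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # load_in_4bit: True
    bnb_4bit_quant_type="nf4",             # bnb_4bit_quant_type: nf4
    bnb_4bit_use_double_quant=False,       # bnb_4bit_use_double_quant: False
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype: float16
)
```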
{"library_name": "peft"}
arya123321/Recipe_Generator
null
[ "peft", "region:us" ]
null
2024-04-20T09:28:00+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
bdsaglam/llama-2-7b-chat-jerx-debug-peft-9k7czwyz
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:28:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct_fictional_Chinese_v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
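The hyperparameters above translate roughly into the following `transformers` sketch; dataset preparation and the TRL `SFTTrainer` call are omitted, and this is a reconstruction rather than the actual training script:

```python
# Hedged reconstruction of the listed hyperparameters; illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Meta-Llama-3-8B-Instruct_fictional_Chinese_v1",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # total train batch size: 16
    num_train_epochs=36,
    lr_scheduler_type="linear",
    seed=42,
)
```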
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_Chinese_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_chinese_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:30:39+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Meta-Llama-3-8B-Instruct_fictional_Chinese_v1 This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# Meta-Llama-3-8B-Instruct_fictional_Chinese_v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 36", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #dataset-generator #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Meta-Llama-3-8B-Instruct_fictional_Chinese_v1\n\nThis model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 36", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.2.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # micro_base_help_tapt_pretrain_model This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5916 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 21 - eval_batch_size: 21 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.9109 | 0.99 | 40 | 1.6849 | | 1.7421 | 2.0 | 81 | 1.6620 | | 1.7411 | 2.99 | 121 | 1.6333 | | 1.6441 | 4.0 | 162 | 1.6306 | | 1.6337 | 4.99 | 202 | 1.6137 | | 1.5774 | 6.0 | 243 | 1.6343 | | 1.5997 | 6.99 | 283 | 1.5931 | | 1.5196 | 8.0 | 324 | 1.6018 | | 1.5416 | 8.99 | 364 | 1.5994 | | 1.4819 | 10.0 | 405 | 1.5886 | | 1.5079 | 10.99 | 445 | 1.5938 | | 1.455 | 12.0 | 486 | 1.5699 | | 1.4718 | 12.99 | 526 | 1.5947 | | 1.4157 | 14.0 | 567 | 1.5920 | | 1.4369 | 14.99 | 607 | 1.5879 | | 1.3733 | 16.0 | 648 | 1.5745 | | 1.4017 | 16.99 | 688 | 1.6000 | | 1.3601 | 18.0 | 729 | 1.5830 | | 1.3602 | 18.99 | 769 | 1.5846 | | 1.3152 | 20.0 | 810 | 1.5940 | | 1.3437 | 20.99 | 850 | 1.5942 | | 1.2904 | 22.0 | 891 | 1.5787 | ### Framework versions - Transformers 4.36.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
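Since this is a masked-LM checkpoint, a minimal usage sketch would look like the following (the example sentence is invented, not taken from the pretraining corpus):

```python
# Illustrative fill-mask usage; <mask> is RoBERTa's mask token.
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="BigTMiami/micro_base_help_tapt_pretrain_model",
)

for prediction in fill_mask("This product was really <mask> and arrived on time."):
    print(prediction["token_str"], round(prediction["score"], 3))
```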
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "roberta-base", "model-index": [{"name": "micro_base_help_tapt_pretrain_model", "results": []}]}
BigTMiami/micro_base_help_tapt_pretrain_model
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:31:45+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
micro\_base\_help\_tapt\_pretrain\_model ======================================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.5916 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 21 * eval\_batch\_size: 21 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * num\_epochs: 100 ### Training results ### Framework versions * Transformers 4.36.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 21\n* eval\\_batch\\_size: 21\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 100", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** dattaraj - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
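A minimal loading sketch with Unsloth's fast path might look like the following; `max_seq_length` is an assumed value, since the card does not state one:

```python
# Hedged sketch: loading the checkpoint for inference with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dattaraj/llama3-8b-PubMedQA-FineTuned",
    max_seq_length=2048,  # assumption, not from the card
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enables Unsloth's faster generation path
```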
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
dattaraj/llama3-8b-PubMedQA-FineTuned
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:34:32+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model

- Developed by: dattaraj
- License: apache-2.0
- Finetuned from model : unsloth/llama-3-8b-bnb-4bit

This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.

<img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: dattaraj\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: dattaraj\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
PilliSiddharth/mistral_b_medical_code
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:35:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# UpshotLlama-3-8B

This is an ORPO fine-tune of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on a 2k-sample subset of dpo_math_data from [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k).

It's a successful fine-tune that follows the ChatML template!

## 🔎 Application

This model uses a context window of 8k. It was trained with the ChatML template.

## 💻 Usage

```python
# Install dependencies first (notebook/shell command):
# !pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Aditya685/UpshotLlama-3-8B"
messages = [{"role": "user", "content": "Given the equation 4x + 7 = 55. Find the value of x"}]

# Build a ChatML-style prompt from the chat template shipped with the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["orpo", "llama 3", "rlhf", "sft"], "datasets": ["mlabonne/orpo-dpo-mix-40k"]}
Aditya685/UpshotLlama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "orpo", "llama 3", "rlhf", "sft", "conversational", "en", "dataset:mlabonne/orpo-dpo-mix-40k", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:37:18+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #orpo #llama 3 #rlhf #sft #conversational #en #dataset-mlabonne/orpo-dpo-mix-40k #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# UpshotLlama-3-8B

This is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on a 2k-sample subset of dpo_math_data from mlabonne/orpo-dpo-mix-40k.

It's a successful fine-tune that follows the ChatML template!

## Application

This model uses a context window of 8k. It was trained with the ChatML template.

## Usage
[ "# UpshotLlama-3-8B\n\nThis is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 2k sample of dpo_math_data from mlabonne/orpo-dpo-mix-40k.\n\nIt's a successful fine-tune that follows the ChatML template!", "## Application\n\nThis model uses a context window of 8k. It was trained with the ChatML template.", "## Usage" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #orpo #llama 3 #rlhf #sft #conversational #en #dataset-mlabonne/orpo-dpo-mix-40k #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# UpshotLlama-3-8B\n\nThis is an ORPO fine-tune of meta-llama/Meta-Llama-3-8B on 2k sample of dpo_math_data from mlabonne/orpo-dpo-mix-40k.\n\nIt's a successful fine-tune that follows the ChatML template!", "## Application\n\nThis model uses a context window of 8k. It was trained with the ChatML template.", "## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
IntervitensInc/intv_l3_mk6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-20T09:39:15+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="https://huggingface.co/lodrick-the-lafted/Copus-2x8B/resolve/main/copus.png"> MoE'd up: - [dreamgen/opus-v1.2-llama-3-8b](https://huggingface.co/dreamgen/opus-v1.2-llama-3-8b) - [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) Which were the two most interesting llama3 finetunes so far. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
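The card above gives no loading instructions; as a non-authoritative sketch, the snippet below assumes the MoE-merged checkpoint loads through the standard transformers causal-LM API (the repo id is taken from this row's `id` field; note the `6-bit` tag may mean the weights actually require a quantization-aware loader instead).

```python
# Hedged sketch: assumes the merged weights load like an ordinary causal LM checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "blockblockblock/Copus-2x8B-bpw6"  # repo id from this dataset row

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama-3-style chat prompting via the tokenizer's chat template, if one is defined.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```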
{"license": "llama2"}
blockblockblock/Copus-2x8B-bpw6
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "6-bit", "region:us" ]
null
2024-04-20T09:40:19+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us
<img src=URL> MoE'd up: - dreamgen/opus-v1.2-llama-3-8b - NousResearch/Meta-Llama-3-8B-Instruct Which were the two most interesting llama3 finetunes so far. Resulting model seems OK. It's not on Miqu's level, anyway. Blah, blah, llama 3 license (no tag for it yet). Also not going to name my model Llama-3-Copus. Come at me, Zuck.
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #6-bit #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
zandfj/LLaMA2-7B-Chat-dpo-aftersft_200-042017
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:40:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
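The card itself is an unfilled template, but this row's tags (`diffusers`, `diffusers:StableDiffusionPipeline`, `text-to-image`) suggest the checkpoint can be loaded like any Stable Diffusion pipeline. The following is a hedged sketch only; the prompt, dtype, and step count are illustrative, not from the card.

```python
# Hedged sketch: assumes a standard StableDiffusionPipeline checkpoint,
# as indicated by this row's diffusers tags.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Niggendar/realAlicemixV1_v10",  # repo id from this dataset row
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a watercolor landscape at dusk", num_inference_steps=30).images[0]
image.save("sample.png")
```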
{"library_name": "diffusers"}
Niggendar/realAlicemixV1_v10
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-20T09:41:44+00:00
[ "1910.09700" ]
[]
TAGS #diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#diffusers #safetensors #arxiv-1910.09700 #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a diffusers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
feature-extraction
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_bge_ver23 This model is a fine-tuned version of [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
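As a minimal sketch of how the hyperparameters listed in this card map onto `transformers.TrainingArguments`: the output directory is a placeholder, the reported total train batch size of 64 comes from launching on 2 GPUs rather than from an extra argument, and the Adam betas/epsilon shown in the card are the Trainer's AdamW defaults.

```python
# Hedged sketch: reconstructs the card's reported hyperparameters.
# Total train batch size 64 = 32 per device x 2 GPUs (via torchrun/accelerate).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_bge_ver23",   # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30.0,
    fp16=True,  # "Native AMP" mixed precision
    # Optimizer: Trainer's default AdamW with betas=(0.9, 0.999), eps=1e-8,
    # matching the values reported in the card.
)
```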
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "BAAI/bge-m3", "model-index": [{"name": "finetuned_bge_ver23", "results": []}]}
comet24082002/finetuned_bge_ver23
null
[ "transformers", "tensorboard", "safetensors", "xlm-roberta", "feature-extraction", "generated_from_trainer", "base_model:BAAI/bge-m3", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-20T09:41:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us
# finetuned_bge_ver23 This model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# finetuned_bge_ver23\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #xlm-roberta #feature-extraction #generated_from_trainer #base_model-BAAI/bge-m3 #license-mit #endpoints_compatible #region-us \n", "# finetuned_bge_ver23\n\nThis model is a fine-tuned version of BAAI/bge-m3 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 2\n- total_train_batch_size: 64\n- total_eval_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 30.0\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]