Dataset schema (one row per column: name, feature type, observed size range):

| Column | Type | Range |
|---|---|---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 438k |
| id | stringlengths | 5 – 122 |
| last_modified | null | n/a |
| tags | listlengths | 1 – 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 – 25 |
| arxiv | listlengths | 0 – 201 |
| languages | listlengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | listlengths | 0 – 722 |
| processed_texts | listlengths | 1 – 723 |
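Each record below lists its fields in the schema order above, one field per line (`null` marks an empty field). As a quick orientation, here is a minimal sketch of loading and inspecting rows with this schema via the 🤗 `datasets` library; the repo id `user/model-cards-dump` is a placeholder for wherever this dump is actually hosted.

```python
# A sketch, not canonical loading code: the dataset repo id below is a
# placeholder and must be replaced with the real one.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train")  # hypothetical repo id

row = ds[0]
print(row["id"], row["pipeline_tag"], row["library_name"])
print(row["tags"][:5])               # raw tag strings
print(row["text"][:200])             # raw model-card markdown
print(len(row["processed_texts"]))   # per-section split of the cleaned card
```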
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Yasusan/Llama2_0409_v2
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:25:50+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
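The "How to Get Started" section of the card above is still a placeholder. Given this repo's tags (transformers, safetensors, llama, text-generation, conversational), the standard loading path would look like the sketch below; this is inferred from the tags, not code provided by the card itself.

```python
# Hedged sketch: inferred from the repo tags, not supplied by the model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yasusan/Llama2_0412"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```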
{"library_name": "transformers", "tags": []}
Yasusan/Llama2_0412
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T14:26:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# Morgan711

This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.2535

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.4941        | 1.0   | 23694 | 2.4007          |
| 2.3279        | 2.0   | 47388 | 2.2823          |
| 2.2786        | 3.0   | 71082 | 2.2535          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
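The hyperparameters reported above map one-to-one onto `transformers.TrainingArguments`; the following is a hedged reconstruction of that setup, not the author's actual training script. The dataset is listed as unknown, so `train_ds` and `eval_ds` are placeholders.

```python
# Hedged reconstruction of the reported setup; train_ds / eval_ds are
# placeholders because the training dataset is not documented.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")

args = TrainingArguments(
    output_dir="Morgan711",
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers default
# optimizer, so no explicit optimizer argument is needed.
train_ds = eval_ds = None  # placeholders: substitute real tokenized datasets
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()  # uncomment once real datasets are supplied
```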
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "Morgan711", "results": []}]}
Morgan711/Morgan711
null
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T14:29:02+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Morgan711 ========= This model is a fine-tuned version of distilgpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.2535 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
image-segmentation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# segformer-b2-fashion

This model is a fine-tuned version of [nvidia/mit-b2](https://huggingface.co/nvidia/mit-b2) on the sayeed99/fashion_segmentation dataset.

```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn

processor = SegformerImageProcessor.from_pretrained("sayeed99/segformer-b2-fashion")
model = AutoModelForSemanticSegmentation.from_pretrained("sayeed99/segformer-b2-fashion")

url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()

# Upsample the low-resolution logits back to the input image size.
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bilinear",
    align_corners=False,
)

pred_seg = upsampled_logits.argmax(dim=1)[0]
plt.imshow(pred_seg)
```

Labels: {"0": "Everything Else", "1": "shirt, blouse", "2": "top, t-shirt, sweatshirt", "3": "sweater", "4": "cardigan", "5": "jacket", "6": "vest", "7": "pants", "8": "shorts", "9": "skirt", "10": "coat", "11": "dress", "12": "jumpsuit", "13": "cape", "14": "glasses", "15": "hat", "16": "headband, head covering, hair accessory", "17": "tie", "18": "glove", "19": "watch", "20": "belt", "21": "leg warmer", "22": "tights, stockings", "23": "sock", "24": "shoe", "25": "bag, wallet", "26": "scarf", "27": "umbrella", "28": "hood", "29": "collar", "30": "lapel", "31": "epaulette", "32": "sleeve", "33": "pocket", "34": "neckline", "35": "buckle", "36": "zipper", "37": "applique", "38": "bead", "39": "bow", "40": "flower", "41": "fringe", "42": "ribbon", "43": "rivet", "44": "ruffle", "45": "sequin", "46": "tassel"}

### Framework versions

- Transformers 4.30.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.13.3

### License

The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2105-15203,
  author     = {Enze Xie and
                Wenhai Wang and
                Zhiding Yu and
                Anima Anandkumar and
                Jose M. Alvarez and
                Ping Luo},
  title      = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers},
  journal    = {CoRR},
  volume     = {abs/2105.15203},
  year       = {2021},
  url        = {https://arxiv.org/abs/2105.15203},
  eprinttype = {arXiv},
  eprint     = {2105.15203},
  timestamp  = {Wed, 02 Jun 2021 11:46:42 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2105-15203.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
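As a small follow-on to the snippet above (it reuses `pred_seg` from that code), one might report which garment classes appear in the predicted mask; the `id2label` dict here is abridged for illustration, with the full mapping given in the Labels dict of the card.

```python
# Continues the card's snippet: pred_seg is the predicted segmentation mask.
# id2label is abridged for illustration; see the full Labels dict above.
id2label = {0: "Everything Else", 1: "shirt, blouse", 7: "pants", 24: "shoe"}

present_ids = pred_seg.unique().tolist()
for class_id in present_ids:
    print(class_id, id2label.get(class_id, "(see full Labels dict above)"))
```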
{"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "datasets": ["sayeed99/fashion_segmentation"], "widget": [{"src": "https://media.istockphoto.com/id/515788534/photo/cheerful-and-confidant.jpg?s=612x612&w=0&k=20&c=T0Z4DfameRpyGhzevPomrm-wjZp7wmGjpAyjGcTzpkA=", "example_title": "Person"}, {"src": "https://storage.googleapis.com/pai-images/1484fd9ea9d746eb9f1de0d6778dbea2.jpeg", "example_title": "Person"}], "pipeline_tag": "image-segmentation", "model-index": [{"name": "segformer-b2-fashion", "results": []}]}
sayeed99/segformer-b2-fashion
null
[ "transformers", "pytorch", "safetensors", "segformer", "vision", "image-segmentation", "generated_from_trainer", "dataset:sayeed99/fashion_segmentation", "arxiv:2105.15203", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:30:46+00:00
[ "2105.15203" ]
[]
TAGS #transformers #pytorch #safetensors #segformer #vision #image-segmentation #generated_from_trainer #dataset-sayeed99/fashion_segmentation #arxiv-2105.15203 #license-other #endpoints_compatible #region-us
# segformer-b2-fashion This model is a fine-tuned version of nvidia/mit-b2 on the sayeed99/fashion_segmentation dataset. Labels : {"0":"Everything Else", "1": "shirt, blouse", "2": "top, t-shirt, sweatshirt", "3": "sweater", "4": "cardigan", "5": "jacket", "6": "vest", "7": "pants", "8": "shorts", "9": "skirt", "10": "coat", "11": "dress", "12": "jumpsuit", "13": "cape", "14": "glasses", "15": "hat", "16": "headband, head covering, hair accessory", "17": "tie", "18": "glove", "19": "watch", "20": "belt", "21": "leg warmer", "22": "tights, stockings", "23": "sock", "24": "shoe", "25": "bag, wallet", "26": "scarf", "27": "umbrella", "28": "hood", "29": "collar", "30": "lapel", "31": "epaulette", "32": "sleeve", "33": "pocket", "34": "neckline", "35": "buckle", "36": "zipper", "37": "applique", "38": "bead", "39": "bow", "40": "flower", "41": "fringe", "42": "ribbon", "43": "rivet", "44": "ruffle", "45": "sequin", "46": "tassel"} ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3 ### License The license for this model can be found here. ### BibTeX entry and citation info '''bibtex @article{DBLP:journals/corr/abs-2105-15203, author = {Enze Xie and Wenhai Wang and Zhiding Yu and Anima Anandkumar and Jose M. Alvarez and Ping Luo}, title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers}, journal = {CoRR}, volume = {abs/2105.15203}, year = {2021}, url = {URL eprinttype = {arXiv}, eprint = {2105.15203}, timestamp = {Wed, 02 Jun 2021 11:46:42 +0200}, biburl = {URL bibsource = {dblp computer science bibliography, URL} }
[ "# segformer-b2-fashion\n\nThis model is a fine-tuned version of nvidia/mit-b2 on the sayeed99/fashion_segmentation dataset.\n\n\n\n\nLabels : {\"0\":\"Everything Else\", \"1\": \"shirt, blouse\", \"2\": \"top, t-shirt, sweatshirt\", \"3\": \"sweater\", \"4\": \"cardigan\", \"5\": \"jacket\", \"6\": \"vest\", \"7\": \"pants\", \"8\": \"shorts\", \"9\": \"skirt\", \"10\": \"coat\", \"11\": \"dress\", \"12\": \"jumpsuit\", \"13\": \"cape\", \"14\": \"glasses\", \"15\": \"hat\", \"16\": \"headband, head covering, hair accessory\", \"17\": \"tie\", \"18\": \"glove\", \"19\": \"watch\", \"20\": \"belt\", \"21\": \"leg warmer\", \"22\": \"tights, stockings\", \"23\": \"sock\", \"24\": \"shoe\", \"25\": \"bag, wallet\", \"26\": \"scarf\", \"27\": \"umbrella\", \"28\": \"hood\", \"29\": \"collar\", \"30\": \"lapel\", \"31\": \"epaulette\", \"32\": \"sleeve\", \"33\": \"pocket\", \"34\": \"neckline\", \"35\": \"buckle\", \"36\": \"zipper\", \"37\": \"applique\", \"38\": \"bead\", \"39\": \"bow\", \"40\": \"flower\", \"41\": \"fringe\", \"42\": \"ribbon\", \"43\": \"rivet\", \"44\": \"ruffle\", \"45\": \"sequin\", \"46\": \"tassel\"}", "### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3", "### License\n\nThe license for this model can be found here.", "### BibTeX entry and citation info\n\n'''bibtex\n@article{DBLP:journals/corr/abs-2105-15203,\n author = {Enze Xie and\n Wenhai Wang and\n Zhiding Yu and\n Anima Anandkumar and\n Jose M. Alvarez and\n Ping Luo},\n title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with\n Transformers},\n journal = {CoRR},\n volume = {abs/2105.15203},\n year = {2021},\n url = {URL\n eprinttype = {arXiv},\n eprint = {2105.15203},\n timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n}" ]
[ "TAGS\n#transformers #pytorch #safetensors #segformer #vision #image-segmentation #generated_from_trainer #dataset-sayeed99/fashion_segmentation #arxiv-2105.15203 #license-other #endpoints_compatible #region-us \n", "# segformer-b2-fashion\n\nThis model is a fine-tuned version of nvidia/mit-b2 on the sayeed99/fashion_segmentation dataset.\n\n\n\n\nLabels : {\"0\":\"Everything Else\", \"1\": \"shirt, blouse\", \"2\": \"top, t-shirt, sweatshirt\", \"3\": \"sweater\", \"4\": \"cardigan\", \"5\": \"jacket\", \"6\": \"vest\", \"7\": \"pants\", \"8\": \"shorts\", \"9\": \"skirt\", \"10\": \"coat\", \"11\": \"dress\", \"12\": \"jumpsuit\", \"13\": \"cape\", \"14\": \"glasses\", \"15\": \"hat\", \"16\": \"headband, head covering, hair accessory\", \"17\": \"tie\", \"18\": \"glove\", \"19\": \"watch\", \"20\": \"belt\", \"21\": \"leg warmer\", \"22\": \"tights, stockings\", \"23\": \"sock\", \"24\": \"shoe\", \"25\": \"bag, wallet\", \"26\": \"scarf\", \"27\": \"umbrella\", \"28\": \"hood\", \"29\": \"collar\", \"30\": \"lapel\", \"31\": \"epaulette\", \"32\": \"sleeve\", \"33\": \"pocket\", \"34\": \"neckline\", \"35\": \"buckle\", \"36\": \"zipper\", \"37\": \"applique\", \"38\": \"bead\", \"39\": \"bow\", \"40\": \"flower\", \"41\": \"fringe\", \"42\": \"ribbon\", \"43\": \"rivet\", \"44\": \"ruffle\", \"45\": \"sequin\", \"46\": \"tassel\"}", "### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3", "### License\n\nThe license for this model can be found here.", "### BibTeX entry and citation info\n\n'''bibtex\n@article{DBLP:journals/corr/abs-2105-15203,\n author = {Enze Xie and\n Wenhai Wang and\n Zhiding Yu and\n Anima Anandkumar and\n Jose M. Alvarez and\n Ping Luo},\n title = {SegFormer: Simple and Efficient Design for Semantic Segmentation with\n Transformers},\n journal = {CoRR},\n volume = {abs/2105.15203},\n year = {2021},\n url = {URL\n eprinttype = {arXiv},\n eprint = {2105.15203},\n timestamp = {Wed, 02 Jun 2021 11:46:42 +0200},\n biburl = {URL\n bibsource = {dblp computer science bibliography, URL}\n}" ]
null
null
# ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF

This model was converted to GGUF format from [`TinyLlama/TinyLlama-1.1B-Chat-v1.0`](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew:

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q4_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF --model tinyllama-1.1b-chat-v1.0.Q4_0.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m tinyllama-1.1b-chat-v1.0.Q4_0.gguf -n 128
```
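Besides the llama.cpp CLI and server shown above, the same GGUF file can be driven from Python; the sketch below uses the `llama-cpp-python` bindings, a route this card itself does not document.

```python
# Assumption: loading via the llama-cpp-python bindings; the card only
# documents the llama.cpp CLI/server route.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF",
    filename="tinyllama-1.1b-chat-v1.0.Q4_0.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```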
{"language": ["en"], "license": "apache-2.0", "tags": ["llama-cpp", "gguf-my-repo"], "datasets": ["cerebras/SlimPajama-627B", "bigcode/starcoderdata", "HuggingFaceH4/ultrachat_200k", "HuggingFaceH4/ultrafeedback_binarized"], "widget": [{"example_title": "Fibonacci (Python)", "messages": [{"role": "system", "content": "You are a chatbot who can help code!"}, {"role": "user", "content": "Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI."}]}]}
ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "en", "dataset:cerebras/SlimPajama-627B", "dataset:bigcode/starcoderdata", "dataset:HuggingFaceH4/ultrachat_200k", "dataset:HuggingFaceH4/ultrafeedback_binarized", "license:apache-2.0", "region:us" ]
null
2024-04-12T14:34:13+00:00
[]
[ "en" ]
TAGS #gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #license-apache-2.0 #region-us
# ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF This model was converted to GGUF format from 'TinyLlama/TinyLlama-1.1B-Chat-v1.0' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF\nThis model was converted to GGUF format from 'TinyLlama/TinyLlama-1.1B-Chat-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #en #dataset-cerebras/SlimPajama-627B #dataset-bigcode/starcoderdata #dataset-HuggingFaceH4/ultrachat_200k #dataset-HuggingFaceH4/ultrafeedback_binarized #license-apache-2.0 #region-us \n", "# ybelkada/TinyLlama-1.1B-Chat-v1.0-Q4_0-GGUF\nThis model was converted to GGUF format from 'TinyLlama/TinyLlama-1.1B-Chat-v1.0' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
<!-- header start --> <!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
  <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with hqq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to find out whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method requires it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo facebook/opt-125m are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.

    ```bash
    pip install hqq
    ```

2. Load & run the model.

    ```python
    from transformers import AutoTokenizer
    from hqq.engine.hf import HQQModelForCausalLM
    from hqq.models.hf.base import AutoHQQHFModel

    # Try the HQQ transformers integration first; fall back to the generic loader.
    try:
        model = HQQModelForCausalLM.from_quantized("PrunaAI/facebook-opt-125m-HQQ-1bit-smashed", device_map='auto')
    except Exception:
        model = AutoHQQHFModel.from_quantized("PrunaAI/facebook-opt-125m-HQQ-1bit-smashed")
    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/opt-125m, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
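The "Sync" vs "Async" distinction in the FAQ above comes down to whether the timer waits for outstanding GPU kernels before stopping; below is a minimal illustrative sketch of both measurements in PyTorch (this is not Pruna's benchmark harness).

```python
# Illustrative only; not Pruna's benchmarking code. "Sync" waits for all CUDA
# kernels to finish before stopping the clock, "async" stops as soon as the
# call returns control to the CPU.
import time
import torch

def timed_generate(model, input_ids, sync: bool) -> float:
    torch.cuda.synchronize()            # clean starting point
    start = time.perf_counter()
    model.generate(input_ids, max_new_tokens=32)
    if sync:
        torch.cuda.synchronize()        # "Sync": wait for every queued kernel
    return time.perf_counter() - start  # "Async": otherwise stop immediately
```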
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-opt-125m-HQQ-1bit-smashed
null
[ "transformers", "opt", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T14:34:20+00:00
[]
[]
TAGS #transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with hqq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo facebook/opt-125m installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-125m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-125m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-125m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #opt #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with hqq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/opt-125m installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/opt-125m before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
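The quick-start section above is still a placeholder. A minimal sketch for loading this adapter with `peft` on top of its declared base model `mistralai/Mistral-7B-v0.1` (the prompt string and generation settings are illustrative, not from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "mille055/duke_chatbot0412_adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned adapter weights

inputs = tokenizer("Tell me about Duke University.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```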
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
mille055/duke_chatbot0412_adapter
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-12T14:36:26+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
Cooking assistant trained on my own dataset. Training code: https://github.com/jdecourval/chef-ai/
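A minimal usage sketch, assuming the repo's tokenizer ships a chat template (the model is tagged `conversational`; the GGUF files in the repo are for llama.cpp and not used here). The example question is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "jdecourval/chef-ai"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How do I keep a risotto creamy without adding cream?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```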
{"license": "apache-2.0"}
jdecourval/chef-ai
null
[ "transformers", "safetensors", "gguf", "llama", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T14:38:16+00:00
[]
[]
TAGS #transformers #safetensors #gguf #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Cooking assistant trained on my own dataset. Training code: URL
[]
[ "TAGS\n#transformers #safetensors #gguf #llama #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
video-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3187 - Accuracy: 0.9237 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 2057 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5766 | 0.09 | 188 | 0.7504 | 0.7956 | | 0.4124 | 1.09 | 376 | 0.4797 | 0.8638 | | 0.104 | 2.09 | 564 | 0.5037 | 0.8583 | | 0.0652 | 3.09 | 752 | 0.5683 | 0.8556 | | 0.0821 | 4.09 | 940 | 0.4314 | 0.8992 | | 0.0015 | 5.09 | 1128 | 0.5824 | 0.8883 | | 0.0024 | 6.09 | 1316 | 0.4310 | 0.8883 | | 0.0007 | 7.09 | 1504 | 0.4375 | 0.8992 | | 0.0007 | 8.09 | 1692 | 0.3199 | 0.9183 | | 0.0006 | 9.09 | 1880 | 0.3189 | 0.9210 | | 0.0006 | 10.09 | 2057 | 0.3187 | 0.9237 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
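To complement the training recipe above, a minimal inference sketch for the fine-tuned checkpoint; the random 16-frame clip is a stand-in for real video frames, and the predicted label names depend on the (undocumented) fine-tuning dataset:

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

ckpt = "bluebird089/videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab"
processor = VideoMAEImageProcessor.from_pretrained(ckpt)
model = VideoMAEForVideoClassification.from_pretrained(ckpt)

# VideoMAE consumes fixed-length clips (16 frames by default for this base model).
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```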
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base-finetuned-kinetics", "model-index": [{"name": "videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab", "results": []}]}
bluebird089/videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab
null
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:38:37+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us
videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab ============================================================= This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3187 * Accuracy: 0.9237 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 9e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * training\_steps: 2057 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 2057", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 2057", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the saved Q-table dictionary (what the course's load_from_hub helper does).
model = pickle.load(open(hf_hub_download(repo_id="BecTome/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
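Evaluating the loaded agent is just a greedy rollout over the Q-table. A sketch assuming the pickled dictionary stores the table under the key `"qtable"` (the convention used by the Deep RL course notebooks; not confirmed by this card):

```python
import numpy as np

state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action, no exploration
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(total_reward)  # the card reports mean reward 1.00 +/- 0.00 on this non-slippery map
```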
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
BecTome/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-12T14:39:33+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
video-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-data-no-yolo-kaggle This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4866 - Accuracy: 0.9046 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 4125 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2716 | 0.09 | 376 | 0.6657 | 0.8147 | | 0.2248 | 1.09 | 752 | 0.5673 | 0.8065 | | 0.0106 | 2.09 | 1128 | 0.4900 | 0.8529 | | 0.0548 | 3.09 | 1504 | 0.5760 | 0.8719 | | 0.0203 | 4.09 | 1880 | 0.8674 | 0.8311 | | 0.0008 | 5.09 | 2256 | 0.5422 | 0.8692 | | 0.0004 | 6.09 | 2632 | 0.4938 | 0.8965 | | 0.0021 | 7.09 | 3008 | 0.7053 | 0.8583 | | 0.0002 | 8.09 | 3384 | 0.5675 | 0.8747 | | 0.0003 | 9.09 | 3760 | 0.4930 | 0.9046 | | 0.0003 | 10.09 | 4125 | 0.4866 | 0.9046 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base-finetuned-kinetics", "model-index": [{"name": "videomae-base-finetuned-kinetics-finetuned-data-no-yolo-kaggle", "results": []}]}
bluebird089/videomae-base-finetuned-kinetics-finetuned-data-no-yolo-kaggle
null
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:40:53+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us
videomae-base-finetuned-kinetics-finetuned-data-no-yolo-kaggle ============================================================== This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4866 * Accuracy: 0.9046 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 9e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * training\_steps: 4125 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4125", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 9e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 4125", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements from the original repo Voicelab/trurl-2-7b are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization related packages.

```bash
pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
```

2. Load & run the model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("PrunaAI/Voicelab-trurl-2-7b-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("Voicelab/trurl-2-7b")

input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0]))
```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model Voicelab/trurl-2-7b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/Voicelab-trurl-2-7b-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T14:42:00+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo Voicelab/trurl-2-7b installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model Voicelab/trurl-2-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo Voicelab/trurl-2-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model Voicelab/trurl-2-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo Voicelab/trurl-2-7b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model Voicelab/trurl-2-7b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-to-image
diffusers
# Blue_pencil_v6 API Inference ![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/7198529311712933101.png) ## Get API Key Get API key from [ModelsLab API](http://modelslab.com), No Payment needed. Replace Key in below code, change **model_id** to "bluepencilv6" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs) Try model for free: [Generate Images](https://modelslab.com/models/bluepencilv6) Model link: [View model](https://modelslab.com/models/bluepencilv6) View all models: [View Models](https://modelslab.com/models) import requests import json url = "https://modelslab.com/api/v6/images/text2img" payload = json.dumps({ "key": "your_api_key", "model_id": "bluepencilv6", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(response.text) > Use this coupon code to get 25% off **DMGG0RBN**
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/bluepencilv6
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-12T14:48:02+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# Blue_pencil_v6 API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "bluepencilv6" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "bluepencilv6", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# Blue_pencil_v6 API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"bluepencilv6\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"bluepencilv6\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# Blue_pencil_v6 API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"bluepencilv6\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"bluepencilv6\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
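The quick-start above is empty; a generic sketch for running this StableLM-architecture checkpoint as a causal LM (the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heyllm234/sc28"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, who are you?", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```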
{"library_name": "transformers", "tags": []}
heyllm234/sc28
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:48:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
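Since this card's getting-started section is empty, the following is a minimal, hypothetical sketch based only on this record's tags (`vision-encoder-decoder`) and repo id. The use of `AutoImageProcessor`/`AutoTokenizer`, the presence of bundled preprocessing files in the repo, and the `invoice.png` input are all assumptions, not facts from the card.

```python
# Hypothetical getting-started sketch for a VisionEncoderDecoder checkpoint.
# Assumes the repo bundles an image processor and a tokenizer; "invoice.png" is a placeholder.
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

repo = "subbu3/invoices"  # repo id from this record
model = VisionEncoderDecoderModel.from_pretrained(repo)
image_processor = AutoImageProcessor.from_pretrained(repo)  # assumption: processor files exist
tokenizer = AutoTokenizer.from_pretrained(repo)             # assumption: tokenizer files exist

image = Image.open("invoice.png").convert("RGB")
pixel_values = image_processor(image, return_tensors="pt").pixel_values

# Autoregressively decode text from the document image.
generated_ids = model.generate(pixel_values, max_new_tokens=128)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```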
{"library_name": "transformers", "tags": []}
subbu3/invoices
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:48:59+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 141B-A35B Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English. - **License:** Apache 2.0 - **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized ## Performance Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | BBH | AGIEval | |-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:| | [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 | | [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 | | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning data. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install 'transformers>=4.39.3'
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {
        "role": "system",
        "content": "You are Zephyr, a helpful assistant.",
    },
    {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]
outputs = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`) are also unknown; however, it is likely to have included a mix of Web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1

## Citation

If you find Zephyr 141B-A35B useful in your work, please cite the ORPO paper:

```
@misc{hong2024orpo,
      title={ORPO: Monolithic Preference Optimization without Reference Model},
      author={Jiwoo Hong and Noah Lee and James Thorne},
      year={2024},
      eprint={2403.07691},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

You may also wish to cite the creators of this model:

```
@misc{zephyr_141b,
      author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall},
      title = {Zephyr 141B A35B},
      year = {2024},
      publisher = {Hugging Face},
      journal = {Hugging Face repository},
      howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}}
}
```
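As a rough illustration of the ORPO recipe described above (the official implementation lives in the alignment-handbook repository linked in Model Sources), a minimal single-process run with TRL's `ORPOTrainer` might look like the sketch below. The hyperparameter values are taken from the card; the `trl>=0.8.2` pin, the `output_dir` name, the `bf16` flag, the sequence-length caps, and the assumption that the dataset is already mapped to text `prompt`/`chosen`/`rejected` columns are mine, and the 4-node multi-GPU setup from the card is omitted.

```python
# Minimal ORPO sketch -- not the official recipe; see the alignment-handbook repo above.
# Assumes trl>=0.8.2 and that the preference dataset has been preprocessed into
# text "prompt"/"chosen"/"rejected" columns (chat-template application omitted here).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistral-community/Mixtral-8x22B-v0.1"  # base model from the card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Preference dataset named in the card.
dataset = load_dataset("argilla/distilabel-capybara-dpo-7k-binarized", split="train")

args = ORPOConfig(
    output_dir="zephyr-orpo-sketch",   # assumed name
    learning_rate=5e-6,                # from the card
    per_device_train_batch_size=1,     # from the card
    num_train_epochs=3,                # from the card
    lr_scheduler_type="inverse_sqrt",  # from the card
    warmup_steps=100,                  # from the card
    max_length=1024,                   # assumed sequence caps
    max_prompt_length=512,
    bf16=True,                         # assumed precision
)
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```

Note that ORPO folds the preference objective into a single monolithic loss, which is why no separate SFT checkpoint or reference model appears in the sketch.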
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "arxiv:2403.07691", "arxiv:2311.07911", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T14:50:32+00:00
[ "2403.07691", "2311.07911" ]
[]
TAGS #transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 141B-A35B =============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face > > > Model Details ------------- ### Model Description * Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English. * License: Apache 2.0 * Finetuned from model: mistral-community/Mixtral-8x22B-v0.1 ### Model Sources * Repository: URL * Dataset: URL Performance ----------- Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 32 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: inverse\_sqrt * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1 If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: You may also wish to cite the creators of this model:
[ "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
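The card's usage section is empty; the repo id (`relu-ntnu/pegasus-xsum_v0_trained_on_100`) suggests a Pegasus-XSum checkpoint fine-tuned on 100 examples, so a hedged usage sketch might look like the following. The summarization task and architecture are inferred from the repo name only, and the input text and generation length are placeholders.

```python
# Hypothetical usage sketch; assumes this repo is a Pegasus summarization checkpoint
# (inferred from the repo name, not stated in the card).
from transformers import pipeline

summarizer = pipeline("summarization", model="relu-ntnu/pegasus-xsum_v0_trained_on_100")
article = "Long input document goes here ..."  # placeholder
print(summarizer(article, max_new_tokens=64)[0]["summary_text"])
```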
{"library_name": "transformers", "tags": []}
relu-ntnu/pegasus-xsum_v0_trained_on_100
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:52:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# anima_pencil-XL-v3 API Inference

![generated from modelslab.com](https://pub-3626123a908346a7a8be8d9295f44e26.r2.dev/generations/16529110991712933385.png)

## Get API Key

Get an API key from [ModelsLab API](http://modelslab.com); no payment is needed.

Replace the key in the code below and change **model_id** to "animapencil-xl-v3".

Coding in PHP/Node/Java etc.? Have a look at the docs for more code examples: [View docs](https://modelslab.com/docs)

Try the model for free: [Generate Images](https://modelslab.com/models/animapencil-xl-v3)

Model link: [View model](https://modelslab.com/models/animapencil-xl-v3)

View all models: [View Models](https://modelslab.com/models)

    import requests
    import json

    url = "https://modelslab.com/api/v6/images/text2img"

    payload = json.dumps({
        "key": "your_api_key",
        "model_id": "animapencil-xl-v3",
        "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
        "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
        "width": "512",
        "height": "512",
        "samples": "1",
        "num_inference_steps": "30",
        "safety_checker": "no",
        "enhance_prompt": "yes",
        "seed": None,
        "guidance_scale": 7.5,
        "multi_lingual": "no",
        "panorama": "no",
        "self_attention": "no",
        "upscale": "no",
        "embeddings": "embeddings_model_id",
        "lora": "lora_model_id",
        "webhook": None,
        "track_id": None
    })

    headers = {
        'Content-Type': 'application/json'
    }

    response = requests.request("POST", url, headers=headers, data=payload)

    print(response.text)

> Use this coupon code to get 25% off **DMGG0RBN**
{"license": "creativeml-openrail-m", "tags": ["modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic"], "pinned": true}
stablediffusionapi/animapencil-xl-v3
null
[ "diffusers", "modelslab.com", "stable-diffusion-api", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-04-12T14:53:07+00:00
[]
[]
TAGS #diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us
# anima_pencil-XL-v3 API Inference !generated from URL ## Get API Key Get API key from ModelsLab API, No Payment needed. Replace Key in below code, change model_id to "animapencil-xl-v3" Coding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs Try model for free: Generate Images Model link: View model View all models: View Models import requests import json url = "URL payload = URL({ "key": "your_api_key", "model_id": "animapencil-xl-v3", "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K", "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime", "width": "512", "height": "512", "samples": "1", "num_inference_steps": "30", "safety_checker": "no", "enhance_prompt": "yes", "seed": None, "guidance_scale": 7.5, "multi_lingual": "no", "panorama": "no", "self_attention": "no", "upscale": "no", "embeddings": "embeddings_model_id", "lora": "lora_model_id", "webhook": None, "track_id": None }) headers = { 'Content-Type': 'application/json' } response = requests.request("POST", url, headers=headers, data=payload) print(URL) > Use this coupon code to get 25% off DMGG0RBN
[ "# anima_pencil-XL-v3 API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"animapencil-xl-v3\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"animapencil-xl-v3\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
[ "TAGS\n#diffusers #modelslab.com #stable-diffusion-api #text-to-image #ultra-realistic #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionXLPipeline #region-us \n", "# anima_pencil-XL-v3 API Inference\n\n!generated from URL", "## Get API Key\n\nGet API key from ModelsLab API, No Payment needed. \n\nReplace Key in below code, change model_id to \"animapencil-xl-v3\"\n\nCoding in PHP/Node/Java etc? Have a look at docs for more code examples: View docs\n\nTry model for free: Generate Images\n\nModel link: View model\n\nView all models: View Models\n\n import requests \n import json \n \n url = \"URL \n \n payload = URL({ \n \"key\": \"your_api_key\", \n \"model_id\": \"animapencil-xl-v3\", \n \"prompt\": \"ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K\", \n \"negative_prompt\": \"painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime\", \n \"width\": \"512\", \n \"height\": \"512\", \n \"samples\": \"1\", \n \"num_inference_steps\": \"30\", \n \"safety_checker\": \"no\", \n \"enhance_prompt\": \"yes\", \n \"seed\": None, \n \"guidance_scale\": 7.5, \n \"multi_lingual\": \"no\", \n \"panorama\": \"no\", \n \"self_attention\": \"no\", \n \"upscale\": \"no\", \n \"embeddings\": \"embeddings_model_id\", \n \"lora\": \"lora_model_id\", \n \"webhook\": None, \n \"track_id\": None \n }) \n \n headers = { \n 'Content-Type': 'application/json' \n } \n \n response = requests.request(\"POST\", url, headers=headers, data=payload) \n \n print(URL)\n\n> Use this coupon code to get 25% off DMGG0RBN" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Richatte2000/my_awesome_model This model is a fine-tuned version of [almanach/camembert-base](https://huggingface.co/almanach/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4740 - Validation Loss: 0.4261 - Train Accuracy: 0.8116 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 665, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5682 | 0.4833 | 0.7873 | 0 | | 0.4740 | 0.4261 | 0.8116 | 1 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
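As a worked example, the serialized optimizer dictionary above can be reconstructed in Keras roughly as follows. Only the `PolynomialDecay` schedule and the Adam settings come from the card; `num_labels=2` and the use of `TFAutoModelForSequenceClassification` are assumptions.

```python
# Sketch reconstructing the training setup described above (TF/Keras).
# num_labels=2 is an assumption; the schedule and Adam values are taken from the card.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    "almanach/camembert-base", num_labels=2
)

# Linear decay from 2e-05 to 0 over 665 steps (power=1.0, no cycling).
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05, decay_steps=665, end_learning_rate=0.0, power=1.0
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)

# Hugging Face TF models fall back to their built-in task loss when none is passed.
model.compile(optimizer=optimizer, metrics=["accuracy"])
```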
{"license": "mit", "tags": ["generated_from_keras_callback"], "base_model": "almanach/camembert-base", "model-index": [{"name": "Richatte2000/my_awesome_model", "results": []}]}
Richatte2000/my_awesome_model
null
[ "transformers", "tf", "camembert", "text-classification", "generated_from_keras_callback", "base_model:almanach/camembert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:53:25+00:00
[]
[]
TAGS #transformers #tf #camembert #text-classification #generated_from_keras_callback #base_model-almanach/camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
Richatte2000/my\_awesome\_model =============================== This model is a fine-tuned version of almanach/camembert-base on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.4740 * Validation Loss: 0.4261 * Train Accuracy: 0.8116 * Epoch: 1 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'Adam', 'weight\_decay': None, 'clipnorm': None, 'global\_clipnorm': None, 'clipvalue': None, 'use\_ema': False, 'ema\_momentum': 0.99, 'ema\_overwrite\_frequency': None, 'jit\_compile': False, 'is\_legacy\_optimizer': False, 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 665, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': False, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 665, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #camembert #text-classification #generated_from_keras_callback #base_model-almanach/camembert-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'Adam', 'weight\\_decay': None, 'clipnorm': None, 'global\\_clipnorm': None, 'clipvalue': None, 'use\\_ema': False, 'ema\\_momentum': 0.99, 'ema\\_overwrite\\_frequency': None, 'jit\\_compile': False, 'is\\_legacy\\_optimizer': False, 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 665, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** DreamOnRain
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** mamba-1.4b-hf
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** state-spaces/mamba-1.4b-hf

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
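Because the getting-started section above is empty, here is a hedged sketch for loading this repo as a PEFT adapter on top of the base model named in the record's metadata (`state-spaces/mamba-1.4b-hf`). It assumes `transformers>=4.39` (which added Mamba support) and `peft>=0.10.0`, and the prompt text is a placeholder.

```python
# Hypothetical sketch: load this repo as a PEFT adapter over the base Mamba model.
# Assumes transformers>=4.39 and peft>=0.10.0; "Quote: " is a placeholder prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "state-spaces/mamba-1.4b-hf"  # base model from this record's metadata
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the fine-tuned adapter weights from this repo.
model = PeftModel.from_pretrained(base, "DreamOnRain/mamba-1.4b-hf-quotes")

inputs = tokenizer("Quote: ", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```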
{"language": ["en"], "library_name": "peft", "datasets": ["Abirate/english_quotes"], "base_model": "state-spaces/mamba-1.4b-hf"}
DreamOnRain/mamba-1.4b-hf-quotes
null
[ "peft", "safetensors", "en", "dataset:Abirate/english_quotes", "arxiv:1910.09700", "base_model:state-spaces/mamba-1.4b-hf", "has_space", "region:us" ]
null
2024-04-12T14:54:46+00:00
[ "1910.09700" ]
[ "en" ]
TAGS #peft #safetensors #en #dataset-Abirate/english_quotes #arxiv-1910.09700 #base_model-state-spaces/mamba-1.4b-hf #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: DreamOnRain - Funded by [optional]: - Shared by [optional]: - Model type: mamba-1.4b-hf - Language(s) (NLP): - License: - Finetuned from model [optional]: Abirate/english_quotes ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: DreamOnRain\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: mamba-1.4b-hf\n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: Abirate/english_quotes", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #safetensors #en #dataset-Abirate/english_quotes #arxiv-1910.09700 #base_model-state-spaces/mamba-1.4b-hf #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: DreamOnRain\n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: mamba-1.4b-hf\n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]: Abirate/english_quotes", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
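The template above leaves the getting-started code as "[More Information Needed]". For the repository this record describes (relu-ntnu/pegasus-xsum_v1_trained_on_100, per the id field below), here is a minimal sketch, assuming the checkpoint is a standard PEGASUS-style seq2seq summarization model as its name suggests; the example article text is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: this repo hosts a seq2seq summarization checkpoint (a PEGASUS-XSum fine-tune).
repo_id = "relu-ntnu/pegasus-xsum_v1_trained_on_100"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "The quick brown fox jumped over the lazy dog in a surprising display of agility."
inputs = tokenizer(article, return_tensors="pt", truncation=True)

# Generate and decode a short abstractive summary.
summary_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```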
{"library_name": "transformers", "tags": []}
relu-ntnu/pegasus-xsum_v1_trained_on_100
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T14:55:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
    <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GPTQ.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** Where the compression method required it, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be the relevant one depending on the use case. We recommend testing the efficiency gains directly in your use case (a minimal measurement sketch follows at the end of this card).

## Setup

You can run the smashed model with these steps:
0. Check the requirements of the original repo facebook/xglm-564M. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the GPTQ-quantized model and the tokenizer of the original base model.
    model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-xglm-564M-GPTQ-8bit-smashed",
                                                 trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")

    # Tokenize a prompt and move it to whichever device the model was placed on.
    input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

    # Generate up to 216 new tokens and decode the full sequence back to text.
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/xglm-564M, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
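To make the difference between the "Sync" and "Async" metrics from the FAQ concrete, here is a minimal measurement sketch. The helper `measure_latency`, the prompt, and the token count are illustrative and not part of Pruna's tooling; the actual benchmark configuration lives in `model/smash_config.json`.

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("PrunaAI/facebook-xglm-564M-GPTQ-8bit-smashed",
                                             trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

def measure_latency(sync: bool) -> float:
    """Time one generate() call; with sync=True, wait for all queued GPU work."""
    if torch.cuda.is_available():
        torch.cuda.synchronize()           # start from an idle GPU either way
    start = time.perf_counter()
    model.generate(input_ids, max_new_tokens=32)
    if sync and torch.cuda.is_available():
        torch.cuda.synchronize()           # "Sync": include every queued kernel
    return time.perf_counter() - start     # "Async": stop once the CPU has the output handle

measure_latency(sync=False)                # warmup run ("first" metrics differ from later runs)
print(f"sync latency:  {measure_latency(sync=True):.3f} s")
print(f"async latency: {measure_latency(sync=False):.3f} s")
```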
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/facebook-xglm-564M-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "xglm", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-12T14:57:57+00:00
[]
[]
TAGS #transformers #safetensors #xglm #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="URL" target="_blank" rel="noopener noreferrer">
    <img src="https://i.URL" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
  </a>
</div>

![Twitter](URL
![GitHub](URL
![LinkedIn](URL
![Discord](URL

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next here.
- Request access to easily compress your *own* AI models here.
- Read the documentation to learn more here.
- Join the Pruna AI community on Discord here to share feedback and suggestions or to get help.

## Results

!image info

Frequently Asked Questions
- *How does the compression work?* The model is compressed with GPTQ.
- *How does the model quality change?* The quality of the model output might vary compared to the base model.
- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to find out whether the smashed model can benefit you.
- *What is the model format?* We use safetensors.
- *What calibration data has been used?* Where the compression method required it, we used WikiText as the calibration data.
- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.
- *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either can be the relevant one depending on the use case. We recommend testing the efficiency gains directly in your use case.

## Setup

You can run the smashed model with these steps:

0. Check the requirements of the original repo facebook/xglm-564M. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
2. Load & run the model.

## Configurations

The configuration info is in 'smash_config.json'.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, facebook/xglm-564M, which provided the base model, before using this smashed model. The license of the 'pruna-engine' is here on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next here.
- Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/xglm-564M installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo facebook/xglm-564M installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model facebook/xglm-564M before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-classification
bertopic
# impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_13_prob This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_13_prob") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 854 * Number of training documents: 91393 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | 0 | ich - du - bin - hab - mal | 2520 | 0_ich_du_bin_hab | | 1 | kinder - schulen - schulschließungen - eltern - kindern | 928 | 1_kinder_schulen_schulschließungen_eltern | | 2 | flüchtlinge - migranten - flüchtlingen - kriegsflüchtlinge - ukrainer | 810 | 2_flüchtlinge_migranten_flüchtlingen_kriegsflüchtlinge | | 3 | biolabore - biowaffen - biologische - biologischen - labore | 558 | 3_biolabore_biowaffen_biologische_biologischen | | 4 | polizei - polizisten - polizeigewalt - polizist - mann | 880 | 4_polizei_polizisten_polizeigewalt_polizist | | 5 | weihnachten - weihnachtsfest - weihnachtszeit - lockdown - feiern | 507 | 5_weihnachten_weihnachtsfest_weihnachtszeit_lockdown | | 6 | öl - gas - russland - russisches - sanktionen | 772 | 6_öl_gas_russland_russisches | | 7 | negativbewertungen - attacken - german - aufhalten - deutsch | 710 | 7_negativbewertungen_attacken_german_aufhalten | | 8 | mariupol - kiew - stadt - truppen - ukrainischen | 501 | 8_mariupol_kiew_stadt_truppen | | 9 | hersh - seymour - nord - stream - pipelines | 401 | 9_hersh_seymour_nord_stream | | 10 | trump - biden - hunter - cia - fbi | 491 | 10_trump_biden_hunter_cia | | 11 | krieg - military - trump - militär - soldaten | 429 | 11_krieg_military_trump_militär | | 12 | 18 - uhr - rathaus - 00 - markt | 294 | 12_18_uhr_rathaus_00 | | 13 | strasse - 12 - folgt - elternstehenauf - 20 | 559 | 13_strasse_12_folgt_elternstehenauf | | 14 | österreich - österreicher - övp - fpö - neuwahlen | 395 | 14_österreich_österreicher_övp_fpö | | 15 | richter - gericht - senat - bundesverfassungsgericht - abs | 545 | 15_richter_gericht_senat_bundesverfassungsgericht | | 16 | proteste - polizei - städten - corona - demonstranten | 356 | 16_proteste_polizei_städten_corona | | 17 | israel - juden - iran - jüdischen - jüdische | 297 | 17_israel_juden_iran_jüdischen | | 18 | partioten - gruß - tägliche - geoengineering - politische | 298 | 18_partioten_gruß_tägliche_geoengineering | | 19 | masken - maske - cochrane - tragen - maskenpflicht | 266 | 19_masken_maske_cochrane_tragen | | 20 | garten - selbstversorgung - pflanzen - gemüse - frischen | 228 | 20_garten_selbstversorgung_pflanzen_gemüse | | 21 | ukraine - krieg - ukrainekrieg - konflikt - ukrainer | 678 | 21_ukraine_krieg_ukrainekrieg_konflikt | | 22 | hacker - cyber - bsi - schwachstelle - mel | 232 | 22_hacker_cyber_bsi_schwachstelle | | 23 | strasse - folgt - mir - heute - gestern | 279 | 23_strasse_folgt_mir_heute | | 24 | gesellschaft - deine - menschen - spaltung - du | 410 | 24_gesellschaft_deine_menschen_spaltung | | 25 | liebe - eva - grüße - andreas - lieber | 283 | 25_liebe_eva_grüße_andreas | | 26 | ölpreis - barrel - öl - gas 
- gaspreise | 237 | 26_ölpreis_barrel_öl_gas | | 27 | spd - berlin - giffey - wahl - rot | 303 | 27_spd_berlin_giffey_wahl | | 28 | telegram - gu - facebook - folgt - instagram | 254 | 28_telegram_gu_facebook_folgt | | 29 | putin - putins - wladimir - waters - vladimir | 460 | 29_putin_putins_wladimir_waters | | 30 | sanktionen - russland - eu - flugzeuge - verhängt | 308 | 30_sanktionen_russland_eu_flugzeuge | | 31 | english - reviews - means - attacks - cannot | 231 | 31_english_reviews_means_attacks | | 32 | ignazbearth - wertschätzung - me - läuft - paypal | 206 | 32_ignazbearth_wertschätzung_me_läuft | | 33 | reichelt - journalisten - medien - journalismus - mainstream | 330 | 33_reichelt_journalisten_medien_journalismus | | 34 | energiepreise - gas - benzin - strompreise - cent | 379 | 34_energiepreise_gas_benzin_strompreise | | 35 | erdbeben - türkei - syrien - beben - syrischen | 353 | 35_erdbeben_türkei_syrien_beben | | 36 | weizen - getreide - mais - weltweiten - preise | 233 | 36_weizen_getreide_mais_weltweiten | | 37 | kinder - stiko - impfung - kindern - impfkommission | 217 | 37_kinder_stiko_impfung_kindern | | 38 | trinkwasser - wasser - gechlort - wasserversorgung - stadtwerke | 175 | 38_trinkwasser_wasser_gechlort_wasserversorgung | | 39 | russen - russische - russland - russischen - russia | 506 | 39_russen_russische_russland_russischen | | 40 | impfung - impfstoffe - impfstoff - impfungen - nebenwirkungen | 777 | 40_impfung_impfstoffe_impfstoff_impfungen | | 41 | fleisch - lidl - bäckereien - landwirtschaft - zutaten | 205 | 41_fleisch_lidl_bäckereien_landwirtschaft | | 42 | china - peking - chinas - chinesische - sanktionen | 161 | 42_china_peking_chinas_chinesische | | 43 | twitter - tweet - ceiberweiber - whatsapp - account | 193 | 43_twitter_tweet_ceiberweiber_whatsapp | | 44 | märz - maskenpflicht - corona - infektionsschutzgesetz - maßnahmen | 220 | 44_märz_maskenpflicht_corona_infektionsschutzgesetz | | 45 | nord - stream - pipeline - ostsee - inbetriebnahme | 152 | 45_nord_stream_pipeline_ostsee | | 46 | truth - exposed - cannot - reviews - attacks | 258 | 46_truth_exposed_cannot_reviews | | 47 | ärzte - medizin - mediziner - brief - ärztekammer | 266 | 47_ärzte_medizin_mediziner_brief | | 48 | insekten - lebensmitteln - extrakt - lebensmittel - maaßen | 152 | 48_insekten_lebensmitteln_extrakt_lebensmittel | | 49 | pfizer - fda - nebenwirkungen - dokumente - arena | 214 | 49_pfizer_fda_nebenwirkungen_dokumente | | 50 | honig - manuka - dmso - schmerzen - entzündungen | 174 | 50_honig_manuka_dmso_schmerzen | | 51 | video - videos - trailer - musikvideo - dieses | 238 | 51_video_videos_trailer_musikvideo | | 52 | telegram - app - faeser - kretschmer - messengerdienst | 139 | 52_telegram_app_faeser_kretschmer | | 53 | neuestes - zerstörten - zerstückelung - versteckten - ruin | 134 | 53_neuestes_zerstörten_zerstückelung_versteckten | | 54 | lauterbach - karl - gesundheitsminister - lauterbachs - bundesgesundheitsminister | 209 | 54_lauterbach_karl_gesundheitsminister_lauterbachs | | 55 | impfpflicht - österreich - verhältnismäßig - ausgesetzt - abgeschafft | 192 | 55_impfpflicht_österreich_verhältnismäßig_ausgesetzt | | 56 | währung - märz - währungen - digitale - dinar | 233 | 56_währung_märz_währungen_digitale | | 57 | ufo - ufos - aliens - alien - außerirdische | 131 | 57_ufo_ufos_aliens_alien | | 58 | widerstand - demo - demonstrationen - berufsbranchen - freiheit | 260 | 58_widerstand_demo_demonstrationen_berufsbranchen | | 59 | ballon - chinese - ballons - spy 
- balloons | 169 | 59_ballon_chinese_ballons_spy | | 60 | nato - weltkrieg - krieg - springer - döpfner | 287 | 60_nato_weltkrieg_krieg_springer | | 61 | selenskyj - wolodymyr - präsident - ukrainische - selenskyjs | 210 | 61_selenskyj_wolodymyr_präsident_ukrainische | | 62 | tschernobyl - saporischschja - atomkraftwerk - kernkraftwerk - akw | 153 | 62_tschernobyl_saporischschja_atomkraftwerk_kernkraftwerk | | 63 | germany - devastated - my - end - latest | 131 | 63_germany_devastated_my_end | | 64 | klimawandel - erwärmung - klima - grad - fossile | 148 | 64_klimawandel_erwärmung_klima_grad | | 65 | patriots - greetings - personal - go - my | 164 | 65_patriots_greetings_personal_go | | 66 | leopard - panzer - kampfpanzer - lieferung - panzern | 117 | 66_leopard_panzer_kampfpanzer_lieferung | | 67 | impfpflicht - protest - berlin - impffreiheit - protesten | 98 | 67_impfpflicht_protest_berlin_impffreiheit | | 68 | bennett - chabad - naftali - israels - selenskyj | 127 | 68_bennett_chabad_naftali_israels | | 69 | impfpflicht - bundestag - allgemeine - impfpflichtgesetz - allgemeinen | 276 | 69_impfpflicht_bundestag_allgemeine_impfpflichtgesetz | | 70 | tornados - kentucky - beshear - tornado - bundesstaat | 101 | 70_tornados_kentucky_beshear_tornado | | 71 | maxwell - epstein - ghislaine - epsteins - andrew | 139 | 71_maxwell_epstein_ghislaine_epsteins | | 72 | kirche - erzbischof - vigano - kirchen - katholischen | 135 | 72_kirche_erzbischof_vigano_kirchen | | 73 | per - banküberweisung - einzelstream - iban - krankheiten | 142 | 73_per_banküberweisung_einzelstream_iban | | 74 | türkei - turkey - chapter - türkiye - türken | 197 | 74_türkei_turkey_chapter_türkiye | | 75 | australien - bmw - sturgeon - bernie - neuseeland | 103 | 75_australien_bmw_sturgeon_bernie | | 76 | grad - schnee - wetter - kälte - wetterdienst | 105 | 76_grad_schnee_wetter_kälte | | 77 | mrna - dna - rna - pfizer - technologie | 183 | 77_mrna_dna_rna_pfizer | | 78 | ifo - wirtschaft - prozent - rezession - inflation | 331 | 78_ifo_wirtschaft_prozent_rezession | | 79 | autos - auto - elektroautos - elektromobilität - pkw | 174 | 79_autos_auto_elektroautos_elektromobilität | | 80 | shenzhen - china - hongkong - qr - chinesischen | 113 | 80_shenzhen_china_hongkong_qr | | 81 | angst - panik - entwarnung - furcht - wut | 188 | 81_angst_panik_entwarnung_furcht | | 82 | freie - beitreten - medienarbeit - medien - unabhängige | 137 | 82_freie_beitreten_medienarbeit_medien | | 83 | schiff - häfen - hafen - schiffe - container | 126 | 83_schiff_häfen_hafen_schiffe | | 84 | grünen - grüne - greenpeace - hoffmann - christiane | 134 | 84_grünen_grüne_greenpeace_hoffmann | | 85 | wähler - wahlen - partei - fdp - fpö | 204 | 85_wähler_wahlen_partei_fdp | | 86 | 12 - 2021 - pm - 17 - 21 | 194 | 86_12_2021_pm_17 | | 87 | ignazbearth - wertschätzung - me - strasse - paypal | 143 | 87_ignazbearth_wertschätzung_me_strasse | | 88 | merkel - angela - friedenspreis - unesco - merkels | 101 | 88_merkel_angela_friedenspreis_unesco | | 89 | einzelhandel - 2g - lüneburg - regel - oberverwaltungsgericht | 85 | 89_einzelhandel_2g_lüneburg_regel | | 90 | russland - nato - usa - russlands - westen | 543 | 90_russland_nato_usa_russlands | | 91 | attkisson - sharyl - impfstreik - streik - impfzwang | 101 | 91_attkisson_sharyl_impfstreik_streik | | 92 | nesara - gesara - plan - lernst - biblisch | 107 | 92_nesara_gesara_plan_lernst | | 93 | satanisten - satanismus - satanische - zion - satan | 108 | 93_satanisten_satanismus_satanische_zion | | 94 | 
freiheit - nelson - sklaven - willst - du | 256 | 94_freiheit_nelson_sklaven_willst | | 95 | wien - mega - demo - kundgebung - heldenplatz | 188 | 95_wien_mega_demo_kundgebung | | 96 | budapest - antifa - ungarn - linksextremisten - engel | 100 | 96_budapest_antifa_ungarn_linksextremisten | | 97 | bechern - gusseisen - tee - thecrowhouse - ausnahmslos | 101 | 97_bechern_gusseisen_tee_thecrowhouse | | 98 | innenfach - reißverschluss - abnehmbare - hüfttasche - arctic | 100 | 98_innenfach_reißverschluss_abnehmbare_hüfttasche | | 99 | frauen - gender - männer - frau - feminismus | 99 | 99_frauen_gender_männer_frau | | 100 | inspired - solidarität - holnburger - nlp - derwegscheider | 274 | 100_inspired_solidarität_holnburger_nlp | | 101 | steuern - grundeinkommen - steuer - staat - quellensteuer | 176 | 101_steuern_grundeinkommen_steuer_staat | | 102 | bitcoin - kryptowährungen - ethereum - sparkassen - monese | 117 | 102_bitcoin_kryptowährungen_ethereum_sparkassen | | 103 | who - pandemievertrag - abkommen - mitgliedsstaaten - gecko | 107 | 103_who_pandemievertrag_abkommen_mitgliedsstaaten | | 104 | teamheimat - jahn - carsten - heimat - deutschland | 185 | 104_teamheimat_jahn_carsten_heimat | | 105 | palma - la - vulkan - butter - eruption | 128 | 105_palma_la_vulkan_butter | | 106 | nato - krieg - ukraine - eu - russland | 315 | 106_nato_krieg_ukraine_eu | | 107 | inflation - inflationsrate - prozent - verbraucherpreise - preise | 161 | 107_inflation_inflationsrate_prozent_verbraucherpreise | | 108 | buch - charakter - tangsworld - hauptaspekte - bl | 137 | 108_buch_charakter_tangsworld_hauptaspekte | | 109 | löschautomatik - kochgelegenheit - ölpumpen - steuerungen - warmer | 79 | 109_löschautomatik_kochgelegenheit_ölpumpen_steuerungen | | 110 | visa - banken - mastercard - swift - sanktionen | 171 | 110_visa_banken_mastercard_swift | | 111 | zerstückelung - versteckten - ruin - ziele - europas | 79 | 111_zerstückelung_versteckten_ruin_ziele | | 112 | stromausfall - betroffen - strom - störung - telekom | 134 | 112_stromausfall_betroffen_strom_störung | | 113 | mrna - dna - impfstoff - impfstoffe - malone | 184 | 113_mrna_dna_impfstoff_impfstoffe | | 114 | dresden - 1945 - dresdens - gedenken - kremerskothen | 97 | 114_dresden_1945_dresdens_gedenken | | 115 | geschützt - niedrigstand - lagerbestand - eco - begrenzter | 104 | 115_geschützt_niedrigstand_lagerbestand_eco | | 116 | mw - kohleausstieg - wasserstoff - energie - 2030 | 181 | 116_mw_kohleausstieg_wasserstoff_energie | | 117 | reitschuster - boris - bundespressekonferenz - zensurwelle - zensurfreien | 123 | 117_reitschuster_boris_bundespressekonferenz_zensurwelle | | 118 | corona - maßnahmen - aufarbeitung - pandemie - politik | 286 | 118_corona_maßnahmen_aufarbeitung_pandemie | | 119 | cdl - lufthansa - flughafen - patentierbar - preiswert | 131 | 119_cdl_lufthansa_flughafen_patentierbar | | 120 | kanal - kollateral - playlist - generell - abonnieren | 111 | 120_kanal_kollateral_playlist_generell | | 121 | frieden - wagenknecht - 2023 - muc1802 - brandenburger | 201 | 121_frieden_wagenknecht_2023_muc1802 | | 122 | 5g - emf - mobilfunk - strahlung - 2004 | 82 | 122_5g_emf_mobilfunk_strahlung | | 123 | convoy - trucker - konvoi - dc - washington | 98 | 123_convoy_trucker_konvoi_dc | | 124 | germany - devastated - latest - end - my | 131 | 124_germany_devastated_latest_end | | 125 | frieden - gandhi - denkt - selbstfaktenfriedenfreiheit - mahatma | 146 | 125_frieden_gandhi_denkt_selbstfaktenfriedenfreiheit | | 126 | todesfälle - 
covid - kindern - dosis - dunkelziffer | 154 | 126_todesfälle_covid_kindern_dosis | | 127 | duran - emc2 - 0xd449694348b1d618eca2829bbc901782f5172689 - exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r - hex | 75 | 127_duran_emc2_0xd449694348b1d618eca2829bbc901782f5172689_exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r | | 128 | übersterblichkeit - schweden - sterbefallzahlen - mittleren - todesfälle | 100 | 128_übersterblichkeit_schweden_sterbefallzahlen_mittleren | | 129 | jane - ruby - peters - get - dr | 98 | 129_jane_ruby_peters_get | | 130 | störenfriede - survivaldecke - grill - freiverkäuflich - rettungsdecke | 109 | 130_störenfriede_survivaldecke_grill_freiverkäuflich | | 131 | facebook - lula - bolsa - bertelsmann - kugelschreiber | 96 | 131_facebook_lula_bolsa_bertelsmann | | 132 | apolut - app - ios - huawei - kostenlose | 113 | 132_apolut_app_ios_huawei | | 133 | regelmäßige - ca - spaziergänge - teilnehmer - spaziergang | 128 | 133_regelmäßige_ca_spaziergänge_teilnehmer | | 134 | quade - herr - wiens - löst - antifa | 98 | 134_quade_herr_wiens_löst | | 135 | amazon - verlag - verstärkung - kopp - europe | 92 | 135_amazon_verlag_verstärkung_kopp | | 136 | deutschland - deutschen - deutsche - strengsten - politestablishment | 163 | 136_deutschland_deutschen_deutsche_strengsten | | 137 | nuklearen - usa - iss - us - atomkrieg | 229 | 137_nuklearen_usa_iss_us | | 138 | gold - goldpreis - dollar - goldmünzen - goldgeld | 79 | 138_gold_goldpreis_dollar_goldmünzen | | 139 | deutschen - deutschland - deutsche - okkupation - reich | 212 | 139_deutschen_deutschland_deutsche_okkupation | | 140 | netzfund - rathaus - vital - entwickeltes - backmischungen | 72 | 140_netzfund_rathaus_vital_entwickeltes | | 141 | pharma - pharmaindustrie - medikamente - barrington - medikament | 258 | 141_pharma_pharmaindustrie_medikamente_barrington | | 142 | freedom - naturalnews - remedies - passionately - healing | 95 | 142_freedom_naturalnews_remedies_passionately | | 143 | info - direkt - haimbuchner - fpö - ganser | 165 | 143_info_direkt_haimbuchner_fpö | | 144 | polens - verschwiegene - compact - geschichtsheft - schuld | 66 | 144_polens_verschwiegene_compact_geschichtsheft | | 145 | unterstütze - anmelden - rutter - newsletter - ihm | 62 | 145_unterstütze_anmelden_rutter_newsletter | | 146 | afd - vorsitz - democracy - abgeordnete - partei | 130 | 146_afd_vorsitz_democracy_abgeordnete | | 147 | werbematerial - lausen - tom - sendung - intensivzahlen | 223 | 147_werbematerial_lausen_tom_sendung | | 148 | rothschild - rothschilds - logen - familie - dynastie | 63 | 148_rothschild_rothschilds_logen_familie | | 149 | al - islamisten - islamischen - syrien - islam | 109 | 149_al_islamisten_islamischen_syrien | | 150 | orf - gis - zwangsgebühren - gebühren - ziegler | 90 | 150_orf_gis_zwangsgebühren_gebühren | | 151 | greetings - patriots - personal - go - my | 96 | 151_greetings_patriots_personal_go | | 152 | habeck - robert - wirtschaftsminister - reduction - vizekanzler | 128 | 152_habeck_robert_wirtschaftsminister_reduction | | 153 | schöning - kommunismus - weltordnung - abendlandes - heiko | 218 | 153_schöning_kommunismus_weltordnung_abendlandes | | 154 | handy - ortung - privatsphäre - spionage - smart | 113 | 154_handy_ortung_privatsphäre_spionage | | 155 | mückstein - rücktritt - gesundheitsminister - wolfgang - sponsern | 85 | 155_mückstein_rücktritt_gesundheitsminister_wolfgang | | 156 | russische - russland - kanals - internet - cnn | 225 | 156_russische_russland_kanals_internet | | 157 | erwachenbefreiung - eindecken - 
popcorn - zugreifen - medwedew | 82 | 157_erwachenbefreiung_eindecken_popcorn_zugreifen | | 158 | beschreibung - lunch - kurkuma - box - diabetes | 59 | 158_beschreibung_lunch_kurkuma_box | | 159 | id - cbdc - sucharit - bhakdi - digitale | 59 | 159_id_cbdc_sucharit_bhakdi | | 160 | münchen - hamburg - strasse - hannover - bielefeld | 124 | 160_münchen_hamburg_strasse_hannover | | 161 | 20 - 2021 - 12 - denkt - ravensburg | 210 | 161_20_2021_12_denkt | | 162 | wissenschaft - wissenschaftler - krankenstände - pädophilem - abschiedssong | 143 | 162_wissenschaft_wissenschaftler_krankenstände_pädophilem | | 163 | vitamin - d3 - ester - gentechnik - trägt | 78 | 163_vitamin_d3_ester_gentechnik | | 164 | transgender - geschlecht - frauen - papierkrieg - kindermädchenstaat | 107 | 164_transgender_geschlecht_frauen_papierkrieg | | 165 | neubeginn - mfg - grundrechte - diskurs - österreich | 68 | 165_neubeginn_mfg_grundrechte_diskurs | | 166 | heraklit - bivy - ultralite - folie - heraklits | 85 | 166_heraklit_bivy_ultralite_folie | | 167 | eidenberger - emotionen - mayerweck - psychedelika - psychopathen | 78 | 167_eidenberger_emotionen_mayerweck_psychedelika | | 168 | angst - risch - yale - repräsentiert - 92k | 151 | 168_angst_risch_yale_repräsentiert | | 169 | verwaltungsgerichtshof - 2g - hochschulen - vgh - präsenzveranstaltungen | 85 | 169_verwaltungsgerichtshof_2g_hochschulen_vgh | | 170 | dlive - bali - klima - klimaschützer - generation | 91 | 170_dlive_bali_klima_klimaschützer | | 171 | hitler - nazis - nazi - adolf - reich | 247 | 171_hitler_nazis_nazi_adolf | | 172 | mig - polen - kampfjets - 29 - ramstein | 81 | 172_mig_polen_kampfjets_29 | | 173 | bitcoin - 19q8odiu2zar7dfl18ouqivwauvnripceu - 1wxoeuy6ghetkmurdiipllwvya1vh2iwa - core - 10514 | 83 | 173_bitcoin_19q8odiu2zar7dfl18ouqivwauvnripceu_1wxoeuy6ghetkmurdiipllwvya1vh2iwa_core | | 174 | kolloidales - silber - bakterien - pilze - meistverkaufte | 55 | 174_kolloidales_silber_bakterien_pilze | | 175 | navy - trinkwasserqualität - wisnewski - seal - graham | 89 | 175_navy_trinkwasserqualität_wisnewski_seal | | 176 | stuttgart - leibnitz - 30 - uhr - 17 | 117 | 176_stuttgart_leibnitz_30_uhr | | 177 | 00 - uhr - hauptplatz - nürnberg - 12 | 154 | 177_00_uhr_hauptplatz_nürnberg | | 178 | politiker - ministerin - plagiats - hausverbot - zadic | 166 | 178_politiker_ministerin_plagiats_hausverbot | | 179 | youtube - schau - here - kinderschändens - dir | 205 | 179_youtube_schau_here_kinderschändens | | 180 | pcr - test - tests - hesch - cdc | 67 | 180_pcr_test_tests_hesch | | 181 | ohio - zug - raststätten - tankkarten - fahrer | 74 | 181_ohio_zug_raststätten_tankkarten | | 182 | eu - unversehrtheit - europarat - union - recht | 69 | 182_eu_unversehrtheit_europarat_union | | 183 | 0xf39bdfb41f639b82e3d2bf022828bc6394f533a3 - 3jvdnoywmb93hsrgk58zstuxg11pw9mksr - ada - ltc - mgfimc18zvif6dccixmqaap11tg4tf6acj | 62 | 183_0xf39bdfb41f639b82e3d2bf022828bc6394f533a3_3jvdnoywmb93hsrgk58zstuxg11pw9mksr_ada_ltc | | 184 | lauterbach - bkk - karl - bundesgesundheitsminister - gesundheitsminister | 103 | 184_lauterbach_bkk_karl_bundesgesundheitsminister | | 185 | wehrpflicht - bundeswehr - truppe - amtshilfe - soldaten | 195 | 185_wehrpflicht_bundeswehr_truppe_amtshilfe | | 186 | frankreich - biographie - brandneuen - impfpass - erschienen | 83 | 186_frankreich_biographie_brandneuen_impfpass | | 187 | einsatzstiefel - squad - inch - stiefel - sportlicher | 53 | 187_einsatzstiefel_squad_inch_stiefel | | 188 | 02 - 2023 - 06 - 13 - 2023folgt | 146 | 
188_02_2023_06_13 | | 189 | berlin - münchen - dresden - bämm - northeim | 153 | 189_berlin_münchen_dresden_bämm | | 190 | gott - devil - gottes - golubice - uz | 145 | 190_gott_devil_gottes_golubice | | 191 | coronavirus - rki - impfung - corona - fremdschutz | 254 | 191_coronavirus_rki_impfung_corona | | 192 | drohne - zagreb - unbemannte - hubschrauber - drohnen | 109 | 192_drohne_zagreb_unbemannte_hubschrauber | | 193 | ausgängen - usb - aufladung - netzsteckdosen - kfz | 63 | 193_ausgängen_usb_aufladung_netzsteckdosen | | 194 | kimmich - joshua - kimmichs - zdf - fußball | 62 | 194_kimmich_joshua_kimmichs_zdf | | 195 | dr - facharzt - univ - evidenzbasierte - medizin | 81 | 195_dr_facharzt_univ_evidenzbasierte | | 196 | erdbeben - fukushima - stärke - papers - offshore | 116 | 196_erdbeben_fukushima_stärke_papers | | 197 | diesel - liter - benzin - e10 - euro | 107 | 197_diesel_liter_benzin_e10 | | 198 | river - elektrogeräte - laden - wandsteckdosen - branchenstandard | 89 | 198_river_elektrogeräte_laden_wandsteckdosen | | 199 | mittelerde - müller - mah - joachim - soul | 68 | 199_mittelerde_müller_mah_joachim | | 200 | balaton - deutschsprachigen - gemeinschaft - arslan - küssel | 82 | 200_balaton_deutschsprachigen_gemeinschaft_arslan | | 201 | anne - spiegel - bundesfamilienministerin - rheinland - flutkatastrophe | 64 | 201_anne_spiegel_bundesfamilienministerin_rheinland | | 202 | antarktis - antarctica - byrd - admiral - gleason | 75 | 202_antarktis_antarctica_byrd_admiral | | 203 | zensur - blockierungen - qualitätssiegel - löschungen - strikes | 99 | 203_zensur_blockierungen_qualitätssiegel_löschungen | | 204 | impfpflicht - allgemeine - deutschland - 2029 - impfstoff | 226 | 204_impfpflicht_allgemeine_deutschland_2029 | | 205 | cum - guntermann - kohn - tschentscher - ex | 121 | 205_cum_guntermann_kohn_tschentscher | | 206 | briefe - kinder - eintrudeln - eingeschrieben - düsseldorf | 55 | 206_briefe_kinder_eintrudeln_eingeschrieben | | 207 | janich - mg - philippinischen - behörden - philippinen | 113 | 207_janich_mg_philippinischen_behörden | | 208 | neutralität - österreichs - immerwährende - österreichischen - dranbleiben | 64 | 208_neutralität_österreichs_immerwährende_österreichischen | | 209 | taschenmesser - forester - funktionen - victorinox - holzsäge | 91 | 209_taschenmesser_forester_funktionen_victorinox | | 210 | versteckte - ressourcen - nutzen - reformationszeit - stupidity | 60 | 210_versteckte_ressourcen_nutzen_reformationszeit | | 211 | edelstahl - perkolator - perkomax - petromax - helli | 49 | 211_edelstahl_perkolator_perkomax_petromax | | 212 | reawaken - tour - sold - eintopf - stauraum | 97 | 212_reawaken_tour_sold_eintopf | | 213 | münchen - bielefeld - münchen2212 - 2021 - schellingstrasse | 144 | 213_münchen_bielefeld_münchen2212_2021 | | 214 | wien - menschenfischen - 12 - rausausderblase - demo | 141 | 214_wien_menschenfischen_12_rausausderblase | | 215 | wochenende - exxtrafrüh - wünscht - gutenmorgen - schönen | 87 | 215_wochenende_exxtrafrüh_wünscht_gutenmorgen | | 216 | zerstückelung - versteckten - ruin - ziele - europas | 54 | 216_zerstückelung_versteckten_ruin_ziele | | 217 | rechtsextremismus - rechtsextrem - matter - lives - nancy | 122 | 217_rechtsextremismus_rechtsextrem_matter_lives | | 218 | gunnar - kaiser - kaisertv - kanalmitgliedschaft - seminaren | 78 | 218_gunnar_kaiser_kaisertv_kanalmitgliedschaft | | 219 | grosz - gerald - geraldgrosz - oe24 - com | 54 | 219_grosz_gerald_geraldgrosz_oe24 | | 220 | oli - kanalmitglied - spende - 
negativbewertungen - attacken | 78 | 220_oli_kanalmitglied_spende_negativbewertungen | | 221 | novavax - impfstoff - totimpfstoff - valneva - bulgarien | 54 | 221_novavax_impfstoff_totimpfstoff_valneva | | 222 | perspektiven - arge - veranstalten - menschenrechte - andauernde | 83 | 222_perspektiven_arge_veranstalten_menschenrechte | | 223 | cannabis - cannergrow - feenstaub - pflanze - gourmet | 72 | 223_cannabis_cannergrow_feenstaub_pflanze | | 224 | paypal - bankverbindung - manumittas - förderer - zensursicheren | 75 | 224_paypal_bankverbindung_manumittas_förderer | | 225 | orban - ungarn - viktor - ungarische - orbán | 58 | 225_orban_ungarn_viktor_ungarische | | 226 | sars - cov - virus - covid - coronavirus | 98 | 226_sars_cov_virus_covid | | 227 | bodensafe - stk - bestelle - xl - ölbasis | 75 | 227_bodensafe_stk_bestelle_xl | | 228 | corona - impfpflicht - 2029 - coronavirus - euro | 126 | 228_corona_impfpflicht_2029_coronavirus | | 229 | patienten - rki - ecmo - krankenhaus - sterberate | 149 | 229_patienten_rki_ecmo_krankenhaus | | 230 | verschwörungstheorie - bowl - verschwörungstheorien - verschwörungstheoretiker - trek | 116 | 230_verschwörungstheorie_bowl_verschwörungstheorien_verschwörungstheoretiker | | 231 | gasheizung - q10 - mobile - flamme - heater | 90 | 231_gasheizung_q10_mobile_flamme | | 232 | samstag - freiheit - weiz - demo - heldenplatz | 99 | 232_samstag_freiheit_weiz_demo | | 233 | schüler - kalender - ortsgruppenübersichttelegram - zulässig - christopher | 110 | 233_schüler_kalender_ortsgruppenübersichttelegram_zulässig | | 234 | supergesunde - dörrautomat - vielfältige - infrarot - dörren | 53 | 234_supergesunde_dörrautomat_vielfältige_infrarot | | 235 | eu - milliarden - dollar - millionen - euro | 119 | 235_eu_milliarden_dollar_millionen | | 236 | karmasin - sophie - familienministerin - övp - festgenommen | 78 | 236_karmasin_sophie_familienministerin_övp | | 237 | luxusimmobilie - gesundheitswesen - ärztekammer - okkupierten - kündigung | 90 | 237_luxusimmobilie_gesundheitswesen_ärztekammer_okkupierten | | 238 | starlink - musk - spacex - cic - rieger | 83 | 238_starlink_musk_spacex_cic | | 239 | prozesse - wcr - entzündliche - graphene - astaxanthin | 168 | 239_prozesse_wcr_entzündliche_graphene | | 240 | pflegekräfte - patienten - pflegeheimen - 93 - pfleger | 108 | 240_pflegekräfte_patienten_pflegeheimen_93 | | 241 | virus - viren - mikroben - pcr - test | 91 | 241_virus_viren_mikroben_pcr | | 242 | hallo - überweisung - meinung - 7605 - 0013 | 60 | 242_hallo_überweisung_meinung_7605 | | 243 | stew - content - advertise - episodes - shedding | 45 | 243_stew_content_advertise_episodes | | 244 | cbdc - umdrehungen - handgenerators - einsatzfähig - schlechtem | 74 | 244_cbdc_umdrehungen_handgenerators_einsatzfähig | | 245 | ukrainerin - vergewaltigt - tunesier - hotelschiff - düsseldorf | 72 | 245_ukrainerin_vergewaltigt_tunesier_hotelschiff | | 246 | gold - mehrwertsteuer - edelmetallen - rubel - goldbarren | 50 | 246_gold_mehrwertsteuer_edelmetallen_rubel | | 247 | virus - coronavirus - drosten - variante - omikron | 211 | 247_virus_coronavirus_drosten_variante | | 248 | galgant - posch - hildegard - naturheilkundler - heilkraft | 126 | 248_galgant_posch_hildegard_naturheilkundler | | 249 | ecoflow - humanus - codex - maitrunk - powerstation | 97 | 249_ecoflow_humanus_codex_maitrunk | | 250 | korund - selbstschärfender - mahlsteinen - getreides - getrocknete | 44 | 250_korund_selbstschärfender_mahlsteinen_getreides | | 251 | covid - 19 - impfung - 
impfpflichtgesetz - impfstoffe | 226 | 251_covid_19_impfung_impfpflichtgesetz |
| 252 | innsbruck - tiroler - tageszeitung - eisbrocken - organisationsgruppen | 44 | 252_innsbruck_tiroler_tageszeitung_eisbrocken |
| 253 | schlaf - nacht - schlafen - hellwach - wälzt | 55 | 253_schlaf_nacht_schlafen_hellwach |
| 254 | k2 - vitamin - fettlösliches - mct - erhaltung | 44 | 254_k2_vitamin_fettlösliches_mct |
| 255 | preise - teurer - milch - butter - produkte | 70 | 255_preise_teurer_milch_butter |
| 256 | schweden - greenpass - novaccinepassportsanywhere - dänemark - novaccinemandates | 44 | 256_schweden_greenpass_novaccinepassportsanywhere_dänemark |
| 257 | superheilmittel - wirkstoffklasse - herkömmliches - anwendungsbereich - krankheitsbild | 49 | 257_superheilmittel_wirkstoffklasse_herkömmliches_anwendungsbereich |
| 258 | zeitung - berliner - lobbyarbeit - rechtsbruch - zulassungsdesaster | 86 | 258_zeitung_berliner_lobbyarbeit_rechtsbruch |
| 259 | danke - dankeschön - dank - vielen - gelernt | 57 | 259_danke_dankeschön_dank_vielen |
| 260 | wieland - astrozyten - günter - gelöscht - funkgerät | 97 | 260_wieland_astrozyten_günter_gelöscht |
| 261 | faeser - hessen - nancy - spd - bundesinnenministerin | 98 | 261_faeser_hessen_nancy_spd |
| 262 | schuler - gunnar - gleichschritt - kaiser - ralf | 56 | 262_schuler_gunnar_gleichschritt_kaiser |
| 263 | foto - unregierbar - milov - élysée - seelenreise | 70 | 263_foto_unregierbar_milov_élysée |
| 264 | manu - suat - catherine - gestorben - manchester | 93 | 264_manu_suat_catherine_gestorben |
| 265 | investieren - aktiendepot - investition - etoro - sparpläne | 69 | 265_investieren_aktiendepot_investition_etoro |
| 266 | lampe - usb - stirnlampe - mini - perfekte | 54 | 266_lampe_usb_stirnlampe_mini |
| 267 | bp - seenotration - müsliriegel - lebensmittelbevorratung - notverpflegung | 61 | 267_bp_seenotration_müsliriegel_lebensmittelbevorratung |
| 268 | facebook - salzkristall - leuchte - meta - diffuser | 72 | 268_facebook_salzkristall_leuchte_meta |
| 269 | nontschew - mirco - obduktion - baldwin - riedel | 70 | 269_nontschew_mirco_obduktion_baldwin |
| 270 | gesundheit - codex - humanus - broschüre - krankheiten | 153 | 270_gesundheit_codex_humanus_broschüre |
| 271 | emf - c60evo - code - evui - discount | 75 | 271_emf_c60evo_code_evui |
| 272 | catherine - vimeo - marc - frank - com | 104 | 272_catherine_vimeo_marc_frank |
| 273 | frieden - selbstbestimmung - 02 - freiheit - fürth | 90 | 273_frieden_selbstbestimmung_02_freiheit |
| 274 | kallistalk - danke - dank - voraus - vielen | 91 | 274_kallistalk_danke_dank_voraus |
| 275 | mittelerde - denkt - selbstfaktenfriedenfreiheit - flyer - shorts | 63 | 275_mittelerde_denkt_selbstfaktenfriedenfreiheit_flyer |
| 276 | oven - dutch - leicht - standbodenbeutel - hauptmahlzeiten | 71 | 276_oven_dutch_leicht_standbodenbeutel |
| 277 | registriert - 2021 - 2026 - hoffnungsbotschaft - regierungstreuen | 85 | 277_registriert_2021_2026_hoffnungsbotschaft |
| 278 | petromax - touren - assault - attraktiven - pack | 62 | 278_petromax_touren_assault_attraktiven |
| 279 | guten - morgen - valentinstag - schönen - gegenuni | 104 | 279_guten_morgen_valentinstag_schönen |
| 280 | kanada - trudeau - canada - trucker - kanadischen | 44 | 280_kanada_trudeau_canada_trucker |
| 281 | music - discord - contribution - brown - minds | 41 | 281_music_discord_contribution_brown |
| 282 | katastrophen - überleben - handbuch - krisen - survival | 45 | 282_katastrophen_überleben_handbuch_krisen |
| 283 | neuestes - zerstörten - wasserstandsmeldung - boden - mäckle | 55 | 283_neuestes_zerstörten_wasserstandsmeldung_boden |
| 284 | kickl - strache - herbert - jugend - eskalation | 91 | 284_kickl_strache_herbert_jugend |
| 285 | lauterbach - karl - bundesgesundheitsminister - impfstoff - impfungen | 77 | 285_lauterbach_karl_bundesgesundheitsminister_impfstoff |
| 286 | rt - raketenofen - grillparty - 169 - treuemonat | 55 | 286_rt_raketenofen_grillparty_169 |
| 287 | cdu - maaßen - merz - georg - friedrich | 77 | 287_cdu_maaßen_merz_georg |
| 288 | nazis - ukrainischen - bataillon - ukraine - ukrainische | 163 | 288_nazis_ukrainischen_bataillon_ukraine |
| 289 | todesfälle - covid - übersterblichkeit - fallsterblichkeit - 2020 | 121 | 289_todesfälle_covid_übersterblichkeit_fallsterblichkeit |
| 290 | münchen - straße - friedlich - städten - montagsspaziergang | 171 | 290_münchen_straße_friedlich_städten |
| 291 | akku - polymer - schlaufe - wanderns - aufzuhängen | 40 | 291_akku_polymer_schlaufe_wanderns |
| 292 | migranten - maca - melilla - bevölkerungsaustausch - informationsstelle | 116 | 292_migranten_maca_melilla_bevölkerungsaustausch |
| 293 | assange - julian - wikileaks - wasserfilter - julianassange | 46 | 293_assange_julian_wikileaks_wasserfilter |
| 294 | arbeitgeber - inbreeding - köchin - coefficient - unbezahlt | 83 | 294_arbeitgeber_inbreeding_köchin_coefficient |
| 295 | rassismus - hautfarbe - weißer - white - schwarze | 76 | 295_rassismus_hautfarbe_weißer_white |
| 296 | lauterbach - karl - bachelor - kickl - herbert | 56 | 296_lauterbach_karl_bachelor_kickl |
| 297 | nachtragshaushalt - milliarden - schulden - euro - vermögensregister | 131 | 297_nachtragshaushalt_milliarden_schulden_euro |
| 298 | vorzubauen - rechtzeitig - zensurfreien - tragen - auf1 | 40 | 298_vorzubauen_rechtzeitig_zensurfreien_tragen |
| 299 | thurner - catherine - catherines - kanalinfo - sendungen | 71 | 299_thurner_catherine_catherines_kanalinfo |
| 300 | hinterlegte - google - telegram - zensiert - unzensiert | 54 | 300_hinterlegte_google_telegram_zensiert |
| 301 | teiegram - rüber - aufgerollt - verspreche - gleichgeschaltet | 41 | 301_teiegram_rüber_aufgerollt_verspreche |
| 302 | protest - patriotische - straße - symbol - demonstration | 147 | 302_protest_patriotische_straße_symbol |
| 303 | omikron - variante - südafrika - omicron - coronavirus | 186 | 303_omikron_variante_südafrika_omicron |
| 304 | gates - bill - melinda - epstein - leise | 44 | 304_gates_bill_melinda_epstein |
| 305 | brave - google - here - search - landgericht | 43 | 305_brave_google_here_search |
| 306 | rauch - gesundheitsminister - johannes - impfpflicht - gefängnisstrafe | 110 | 306_rauch_gesundheitsminister_johannes_impfpflicht |
| 307 | freundinnen - braun - roman - freunde - curcuma | 44 | 307_freundinnen_braun_roman_freunde |
| 308 | verharmloser - hetzern - lügnern - stopfen - geschichtsausgabe | 106 | 308_verharmloser_hetzern_lügnern_stopfen |
| 309 | blair - rodriguez - his - donald - mcrae | 88 | 309_blair_rodriguez_his_donald |
| 310 | französisch - wisnewski - dreisprachig - philipp - wien | 259 | 310_französisch_wisnewski_dreisprachig_philipp |
| 311 | peters - stew - show - by - exposed | 115 | 311_peters_stew_show_by |
| 312 | italien - meloni - draghi - regionalwahlen - giorgia | 80 | 312_italien_meloni_draghi_regionalwahlen |
| 313 | inflation - ezb - geldpolitik - zentralbank - zinsen | 73 | 313_inflation_ezb_geldpolitik_zentralbank |
| 314 | rt - de - browser - inhalte - telegram | 58 | 314_rt_de_browser_inhalte |
| 315 | locals - duran - month - reviews - attacks | 86 | 315_locals_duran_month_reviews |
| 316 | hanni - mückstein - putzt - schuldiges - mitgetragen | 53 | 316_hanni_mückstein_putzt_schuldiges |
| 317 | laterne - ladung - de22830654080004273567 - spendenkonto - mediakanälen | 71 | 317_laterne_ladung_de22830654080004273567_spendenkonto |
| 318 | haritaki - chip - implantate - roboter - maschine | 41 | 318_haritaki_chip_implantate_roboter |
| 319 | haft - ballweg - remo - gefängnis - pawlowski | 121 | 319_haft_ballweg_remo_gefängnis |
| 320 | chemikalien - pfas - umwelt - tabletten - esbit | 68 | 320_chemikalien_pfas_umwelt_tabletten |
| 321 | corona - schilling - 22 - mayer - oe24 | 54 | 321_corona_schilling_22_mayer |
| 322 | greetings - patriots - personal - go - my | 76 | 322_greetings_patriots_personal_go |
| 323 | alc - nano - tatsachenbehauptungen - facebook - 0159 | 63 | 323_alc_nano_tatsachenbehauptungen_facebook |
| 324 | demos - spaziergang - angemeldete - spaziergänge - spazieren | 119 | 324_demos_spaziergang_angemeldete_spaziergänge |
| 325 | busfahrer - linz - bus - blue - beam | 58 | 325_busfahrer_linz_bus_blue |
| 326 | lichte - dreibändige - reinkarnation - erkenntnisquelle - sinnzusammenhänge | 37 | 326_lichte_dreibändige_reinkarnation_erkenntnisquelle |
| 327 | marktplatz - rathaus - satire - zensursicheren - förderer | 63 | 327_marktplatz_rathaus_satire_zensursicheren |
| 328 | gott - jesus - gottes - huxley - fürchtete | 61 | 328_gott_jesus_gottes_huxley |
| 329 | sönnichsen - freispruch - amtsanmaßung - prozess - andreas | 107 | 329_sönnichsen_freispruch_amtsanmaßung_prozess |
| 330 | gasheizung - wetterfesten - wintergärten - stromunabhängiges - lüftung | 53 | 330_gasheizung_wetterfesten_wintergärten_stromunabhängiges |
| 331 | budapest - bettinalube - ungarn - telegramzur - ignazbearth | 52 | 331_budapest_bettinalube_ungarn_telegramzur |
| 332 | discounts - buy - rvm - order - get | 44 | 332_discounts_buy_rvm_order |
| 333 | tragegriffe - anheben - grauem - pulverbeschichtetem - versetzen | 41 | 333_tragegriffe_anheben_grauem_pulverbeschichtetem |
| 334 | live - streamen - twitch - lbry - kanälen | 52 | 334_live_streamen_twitch_lbry |
| 335 | hommage - video - imad - karim - film | 132 | 335_hommage_video_imad_karim |
| 336 | sellner - monatlich - postfach - monero - vorab | 64 | 336_sellner_monatlich_postfach_monero |
| 337 | demos - demo - suchst - aktualisierter - kalender | 48 | 337_demos_demo_suchst_aktualisierter |
| 338 | auf1 - wenko - vivien - vogt - janotka | 60 | 338_auf1_wenko_vivien_vogt |
| 339 | tschentscher - hamburg - impfstatus - bürgermeister - ungeimpften | 40 | 339_tschentscher_hamburg_impfstatus_bürgermeister |
| 340 | lenkrollen - größtmögliche - mobilität - stabile - lieferbar | 49 | 340_lenkrollen_größtmögliche_mobilität_stabile |
| 341 | helping - brighteon - labeling - contamination - avoidance | 36 | 341_helping_brighteon_labeling_contamination |
| 342 | stiefel - tragegefühl - insulation - innenschuh - obermaterial | 36 | 342_stiefel_tragegefühl_insulation_innenschuh |
| 343 | covid - ivermectin - 19 - remdesivir - patienten | 183 | 343_covid_ivermectin_19_remdesivir |
| 344 | klinik - autokraten - patienten - gestörten - geistig | 60 | 344_klinik_autokraten_patienten_gestörten |
| 345 | zelensky - zelenskyy - präsident - ukrainische - zemmour | 117 | 345_zelensky_zelenskyy_präsident_ukrainische |
| 346 | infrastruktur - feuerwehr - kritische - omikron - grundversorgung | 71 | 346_infrastruktur_feuerwehr_kritische_omikron |
| 347 | biden - joe - us - präsident - burns | 177 | 347_biden_joe_us_präsident |
| 348 | streik - dich - organisiertes - streikpotenzial - profilnamen | 55 | 348_streik_dich_organisiertes_streikpotenzial |
| 349 | rabbit - research - jesionek - reinhard - orf | 72 | 349_rabbit_research_jesionek_reinhard |
| 350 | ausgerufen - katastrophenfall - flutkatastrophe - feuerwehr - neuseeland | 99 | 350_ausgerufen_katastrophenfall_flutkatastrophe_feuerwehr |
| 351 | geburtshaus - schwangere - schwangerschaft - schwangeren - frauen | 46 | 351_geburtshaus_schwangere_schwangerschaft_schwangeren |
| 352 | hierfür - petroleumheizung - folgende - vorteile - alternative | 40 | 352_hierfür_petroleumheizung_folgende_vorteile |
| 353 | wiederzuentdecken - unzulänglichkeiten - gelegenheiten - verborgene - fortschritt | 35 | 353_wiederzuentdecken_unzulänglichkeiten_gelegenheiten_verborgene |
| 354 | australien - überschwemmungen - sydney - erträge - aktuelles | 66 | 354_australien_überschwemmungen_sydney_erträge |
| 355 | kapitalismus - marktwirtschaft - umsatzeinbrüche - sklaven - widerlegbar | 69 | 355_kapitalismus_marktwirtschaft_umsatzeinbrüche_sklaven |
| 356 | kaffee - classic - stanley - pour - guayusa | 49 | 356_kaffee_classic_stanley_pour |
| 357 | regierung - sperrung - bundesregierung - neujahrsruhe - schlimmer | 119 | 357_regierung_sperrung_bundesregierung_neujahrsruhe |
| 358 | vitamin - bluthochdruck - kreislauf - diabetes - herzinsuffizienz | 35 | 358_vitamin_bluthochdruck_kreislauf_diabetes |
| 359 | soundbite - demonstrationen - thema2 - politik3 - corona | 67 | 359_soundbite_demonstrationen_thema2_politik3 |
| 360 | zahnpulver - calcium - lavera - aldi - kassen | 39 | 360_zahnpulver_calcium_lavera_aldi |
| 361 | betet - wien - cheryl - curb - enthusiasm | 50 | 361_betet_wien_cheryl_curb |
| 362 | stromnetz - stromversorgung - stromnetze - kadri - energiekommissarin | 61 | 362_stromnetz_stromversorgung_stromnetze_kadri |
| 363 | bubble - leiberl - komm - zeug - fesche | 36 | 363_bubble_leiberl_komm_zeug |
| 364 | nato - stoltenberg - kampfjets - generalsekretär - lieferung | 175 | 364_nato_stoltenberg_kampfjets_generalsekretär |
| 365 | disney - hersh - kelten - seymour - walt | 116 | 365_disney_hersh_kelten_seymour |
| 366 | intensivbetten - betten - intensivstationen - divi - patienten | 96 | 366_intensivbetten_betten_intensivstationen_divi |
| 367 | fluchtrucksack - nannten - spülmaschinenfest - bpafrei - plane | 67 | 367_fluchtrucksack_nannten_spülmaschinenfest_bpafrei |
| 368 | greetings - patriots - personal - go - my | 44 | 368_greetings_patriots_personal_go |
| 369 | aramid - handschuhe - handrückenbereich - handfläche - leder | 34 | 369_aramid_handschuhe_handrückenbereich_handfläche |
| 370 | arabien - saudi - yuan - indien - dollar | 34 | 370_arabien_saudi_yuan_indien |
| 371 | schiessler - impfpflicht - schöfbeck - geiselhaft - bkk | 194 | 371_schiessler_impfpflicht_schöfbeck_geiselhaft |
| 372 | videokanal - rebell - nachrichtenkanal - aufklärungsvideos - hilfreiche | 70 | 372_videokanal_rebell_nachrichtenkanal_aufklärungsvideos |
| 373 | leer - vollkorn - lebensmitteldiscounter - preta - dosenbrot | 43 | 373_leer_vollkorn_lebensmitteldiscounter_preta |
| 374 | adidas - ikea - russland - unternehmen - sanktionen | 142 | 374_adidas_ikea_russland_unternehmen |
| 375 | kommission - leyen - helping - brighteon - bnt162b2 | 57 | 375_kommission_leyen_helping_brighteon |
| 376 | funkgeräte - limitierung - verschlüsselung - abhörsicher - ausstatten | 46 | 376_funkgeräte_limitierung_verschlüsselung_abhörsicher |
| 377 | powerstation - stromvorrat - abrufen - jeglichen - speichern | 34 | 377_powerstation_stromvorrat_abrufen_jeglichen |
| 378 | alzheimer - wehren - erfahren - demenz - impf | 39 | 378_alzheimer_wehren_erfahren_demenz |
| 379 | newsmax - mail - bastyon - neuigkeiten - newsletter | 45 | 379_newsmax_mail_bastyon_neuigkeiten |
| 380 | geräuschlose - hervorzuheben - innenräumen - profi - verwendung | 34 | 380_geräuschlose_hervorzuheben_innenräumen_profi |
| 381 | lied - song - musik - stangl - mambo | 50 | 381_lied_song_musik_stangl |
| 382 | donbass - bbc - kiew - ukrainische - armee | 134 | 382_donbass_bbc_kiew_ukrainische |
| 383 | niedersachsen - aktionen - freieniedersachsen - übersicht - info | 50 | 383_niedersachsen_aktionen_freieniedersachsen_übersicht |
| 384 | raphael - bonelli - bauchgefühle - nützen - entstehen | 61 | 384_raphael_bonelli_bauchgefühle_nützen |
| 385 | valdai - club - lügenpresse - ladungen - vorträgen | 53 | 385_valdai_club_lügenpresse_ladungen |
| 386 | edeka - produkte - hersteller - regale - händler | 77 | 386_edeka_produkte_hersteller_regale |
| 387 | ukrainischen - ukrainische - freiwillige - soldaten - söldner | 181 | 387_ukrainischen_ukrainische_freiwillige_soldaten |
| 388 | feldhose - slim - fit - bdu - teesar | 51 | 388_feldhose_slim_fit_bdu |
| 389 | bp - nährwerte - süß - norwegischen - schmeckt | 33 | 389_bp_nährwerte_süß_norwegischen |
| 390 | eu - verschlüsselte - rtl - google - richterbund | 39 | 390_eu_verschlüsselte_rtl_google |
| 391 | salman - saudi - mohammed - schwert - arabien | 38 | 391_salman_saudi_mohammed_schwert |
| 392 | übersterblichkeit - senioren - toten - destatis - 2020 | 73 | 392_übersterblichkeit_senioren_toten_destatis |
| 393 | fahrer - russland - lkw - sanktionen - russia | 112 | 393_fahrer_russland_lkw_sanktionen |
| 394 | kadyrow - putin - kompromiss - kiew - putins | 183 | 394_kadyrow_putin_kompromiss_kiew |
| 395 | grammy - satanische - frida - hollywood - show | 59 | 395_grammy_satanische_frida_hollywood |
| 396 | billa - edelmetalle - dir - unverbindlichen - sponsor | 59 | 396_billa_edelmetalle_dir_unverbindlichen |
| 397 | lion - media - de32100110012624879184 - kontoverbindung - inhaber | 88 | 397_lion_media_de32100110012624879184_kontoverbindung |
| 398 | verbinde - punkte - auf1 - negativbewertungen - attacken | 129 | 398_verbinde_punkte_auf1_negativbewertungen |
| 399 | filter - wasserfilter - hohlfaser - verschmutzten - zuverlässigste | 33 | 399_filter_wasserfilter_hohlfaser_verschmutzten |
| 400 | sahara - staub - schwermetalle - laboranalyse - magnetisch | 73 | 400_sahara_staub_schwermetalle_laboranalyse |
| 401 | deutschland - medien - german - komlosy - aufhalten | 248 | 401_deutschland_medien_german_komlosy |
| 402 | orf - beschwerde - verharmlost - formelle - nebenwirkungen | 80 | 402_orf_beschwerde_verharmlost_formelle |
| 403 | mutter - kindstod - baby - babys - plötzliche | 57 | 403_mutter_kindstod_baby_babys |
| 404 | doku - medien - zuschauer - massenmedien - schreiben | 253 | 404_doku_medien_zuschauer_massenmedien |
| 405 | anwälte - bundesverfassungsgericht - karlsruhe - aufklärung - demonstration | 50 | 405_anwälte_bundesverfassungsgericht_karlsruhe_aufklärung |
| 406 | kabellos - integriertes - digitalkamera - spritzwassergeschütztes - tablet | 33 | 406_kabellos_integriertes_digitalkamera_spritzwassergeschütztes |
| 407 | protein - spike - bourla - distanzierte - mrna | 64 | 407_protein_spike_bourla_distanzierte |
| 408 | eu - euro - europa - ahu - steuer | 139 | 408_eu_euro_europa_ahu |
| 409 | obdachlose - obdachlosen - bahnsteigen - 3g - koblenz | 38 | 409_obdachlose_obdachlosen_bahnsteigen_3g |
| 410 | odysee - gettr - youtube - anderen - kanäle | 32 | 410_odysee_gettr_youtube_anderen |
| 411 | kettle - kelly - sturmkanne - edelstahl - original | 32 | 411_kettle_kelly_sturmkanne_edelstahl |
| 412 | polizisten - staatsbürgerlichen - wertekultur - polizei - freiheitlich | 95 | 412_polizisten_staatsbürgerlichen_wertekultur_polizei |
| 413 | ami - bestellung - eingeben - bestellen - home | 48 | 413_ami_bestellung_eingeben_bestellen |
| 414 | atomkraft - teekessel - atomkraftwerke - petromax - habeck | 104 | 414_atomkraft_teekessel_atomkraftwerke_petromax |
| 415 | fed - notenbank - ezb - leitzins - zinsen | 43 | 415_fed_notenbank_ezb_leitzins |
| 416 | lockdown - lockdowns - willkommensgeld - überaltert - geprüfte | 97 | 416_lockdown_lockdowns_willkommensgeld_überaltert |
| 417 | taylor - gunnar - album - sänger - musik | 44 | 417_taylor_gunnar_album_sänger |
| 418 | de34 - 9544 - 7016 - 9466 - genodef1m03 | 47 | 418_de34_9544_7016_9466 |
| 419 | bezahlt - selleriesaft - erfolgreich - strohmeier - natascha | 119 | 419_bezahlt_selleriesaft_erfolgreich_strohmeier |
| 420 | dänemark - omikron - exxpress - corona - maßnahmen | 94 | 420_dänemark_omikron_exxpress_corona |
| 421 | lockdown - gaststätten - niederlande - mallorca - impfpflichtgesetzes | 51 | 421_lockdown_gaststätten_niederlande_mallorca |
| 422 | idealism - prevails - gräftner - spiritualität - nelles | 90 | 422_idealism_prevails_gräftner_spiritualität |
| 423 | stew - content - episodes - shedding - treatment | 32 | 423_stew_content_episodes_shedding |
| 424 | selbstreinigend - alleskönner - absoluter - wasserfilter - extrem | 32 | 424_selbstreinigend_alleskönner_absoluter_wasserfilter |
| 425 | bakterien - pilzeauch - viren - mundspülungen - gurgeln | 59 | 425_bakterien_pilzeauch_viren_mundspülungen |
| 426 | nehammer - wahlauftakt - kärnten - övp - kanzler | 84 | 426_nehammer_wahlauftakt_kärnten_övp |
| 427 | live - streamen - lbry - schweiz - tages | 46 | 427_live_streamen_lbry_schweiz |
| 428 | humor - lustiges - amüsanten - lachen - schmunzeln | 52 | 428_humor_lustiges_amüsanten_lachen |
| 429 | sputnik - rt - talpa - rtl - fusion | 83 | 429_sputnik_rt_talpa_rtl |
| 430 | vonovia - wohnungen - neubau - immobilien - zinsen | 136 | 430_vonovia_wohnungen_neubau_immobilien |
| 431 | vordenken - mitdenken - nachdenken - hellsten - folge | 177 | 431_vordenken_mitdenken_nachdenken_hellsten |
| 432 | dr - kistel - spreely - puresocialnetwork - pinterest | 69 | 432_dr_kistel_spreely_puresocialnetwork |
| 433 | myokarditis - moderna - impfung - mrna - studie | 113 | 433_myokarditis_moderna_impfung_mrna |
| 434 | impfnebenwirkungen - daten - fälle - impfschäden - impfung | 191 | 434_impfnebenwirkungen_daten_fälle_impfschäden |
| 435 | blackout - rwe - strom - stromausfall - eex | 125 | 435_blackout_rwe_strom_stromausfall |
| 436 | euro - diäten - simson - gehalt - monat | 34 | 436_euro_diäten_simson_gehalt |
| 437 | spenden - armut - stillman - wohlstand - warburg | 86 | 437_spenden_armut_stillman_wohlstand |
| 438 | ärzte - impfpflicht - impfung - mediziner - szekeres | 245 | 438_ärzte_impfpflicht_impfung_mediziner |
| 439 | rfid - aktiendepot - geldbörsen - kredit - micro | 50 | 439_rfid_aktiendepot_geldbörsen_kredit |
| 440 | führich - thurner - catherine - interview - website | 233 | 440_führich_thurner_catherine_interview |
| 441 | frauentag - feministische - frauen - weltfrauentag - geschlechtergerechtigkeit | 52 | 441_frauentag_feministische_frauen_weltfrauentag |
| 442 | feuerstahl - zündmittel - funktionsfähiges - feuer - einfaches | 49 | 442_feuerstahl_zündmittel_funktionsfähiges_feuer |
| 443 | müller - mittelerde - joachim - fairtalk - hans | 58 | 443_müller_mittelerde_joachim_fairtalk |
| 444 | verbrenner - 2035 - eu - pkw - parlament | 138 | 444_verbrenner_2035_eu_pkw |
| 445 | verordnungen - verfassungs - covid - gesetzeswidrig - gesetzwidrig | 137 | 445_verordnungen_verfassungs_covid_gesetzeswidrig |
| 446 | staatsrechtler - versammlungsfreiheit - versammlung - artikel - verfassungswidrig | 58 | 446_staatsrechtler_versammlungsfreiheit_versammlung_artikel |
| 447 | chip - neuralink - haut - chips - musk | 48 | 447_chip_neuralink_haut_chips |
| 448 | bunker - geheimarmeen - afghanistan - nato - taliban | 233 | 448_bunker_geheimarmeen_afghanistan_nato |
| 449 | jobsuche - jobplattform - füreinefreieimpfentscheidung - vollzeit - jobs | 40 | 449_jobsuche_jobplattform_füreinefreieimpfentscheidung_vollzeit |
| 450 | kommissionspräsidentin - leyen - swiss - terhes - schweizer | 38 | 450_kommissionspräsidentin_leyen_swiss_terhes |
| 451 | taifun - philippinen - rai - lagern - katastrophenfällewas | 54 | 451_taifun_philippinen_rai_lagern |
| 452 | ernährung - vital - basis - mineralien - kapseln | 34 | 452_ernährung_vital_basis_mineralien |
| 453 | maca - ladezeit - schneller - tragbarer - lfp | 53 | 453_maca_ladezeit_schneller_tragbarer |
| 454 | matrix - seelenkräfte - film - doku - wachrütteln | 69 | 454_matrix_seelenkräfte_film_doku |
| 455 | krankenschwester - binger - hospital - felicia - jasmin | 48 | 455_krankenschwester_binger_hospital_felicia |
| 456 | bewusst - spielzeug - eigenverantwortlichkeit - wiedergegeben - manchem | 76 | 456_bewusst_spielzeug_eigenverantwortlichkeit_wiedergegeben |
| 457 | wasser - kochen - kannen - kamineffektes - minderwertigere | 49 | 457_wasser_kochen_kannen_kamineffektes |
| 458 | wien - strasse - olli - vermüllung - rotationseuropäer | 44 | 458_wien_strasse_olli_vermüllung |
| 459 | injektion - england - jähriges - pfizer - todesfälle | 42 | 459_injektion_england_jähriges_pfizer |
| 460 | profilnamen - emoji - schliess - platziert - eintragen | 36 | 460_profilnamen_emoji_schliess_platziert |
| 461 | filterkaraffe - esprit - liefersituation - gleichwertigen - abweichen | 30 | 461_filterkaraffe_esprit_liefersituation_gleichwertigen |
| 462 | demokratie - direkte - parteidiktatur - mitgliedsstaaten - lenin | 92 | 462_demokratie_direkte_parteidiktatur_mitgliedsstaaten |
| 463 | gasspeicher - füllstand - gefüllt - prozent - bleschke | 40 | 463_gasspeicher_füllstand_gefüllt_prozent |
| 464 | faktenchecker - faktenchecks - mediale - manipulation - behauptung | 123 | 464_faktenchecker_faktenchecks_mediale_manipulation |
| 465 | nattokinase - natto - hergestellt - heilnatura - zusatzstofffrei | 53 | 465_nattokinase_natto_hergestellt_heilnatura |
| 466 | wildgebieten - pump - selbstreinigend - alleskönner - absoluter | 29 | 466_wildgebieten_pump_selbstreinigend_alleskönner |
| 467 | mindestlaufzeit - kaufverpflichtung - mtl - kündigungsfristen - platin | 29 | 467_mindestlaufzeit_kaufverpflichtung_mtl_kündigungsfristen |
| 468 | ärztekammer - szekeres - ärzte - wohlfahrtsfonds - mitglieder | 97 | 468_ärztekammer_szekeres_ärzte_wohlfahrtsfonds |
| 469 | kliniken - krankenhäuser - corona - freihaltepauschalen - patienten | 203 | 469_kliniken_krankenhäuser_corona_freihaltepauschalen |
| 470 | ohio - chemikalien - entgleisung - vinylchlorid - giftigen | 48 | 470_ohio_chemikalien_entgleisung_vinylchlorid |
| 471 | tierschutz - raubtiere - calhoun - tiere - massivsten | 62 | 471_tierschutz_raubtiere_calhoun_tiere |
| 472 | thais - passantin - chasing - vergewaltigt - horse | 92 | 472_thais_passantin_chasing_vergewaltigt |
| 473 | doorjammer - zhdanov - bienenprodukte - obiges - magnesium | 52 | 473_doorjammer_zhdanov_bienenprodukte_obiges |
| 474 | youtbe - odyssee - rabbit - research - substack | 36 | 474_youtbe_odyssee_rabbit_research |
| 475 | putin - feindbild - babel - marschiert - verhasst | 150 | 475_putin_feindbild_babel_marschiert |
| 476 | faschismus - faschisten - schockiert - nazi - tobi | 93 | 476_faschismus_faschisten_schockiert_nazi |
| 477 | warentest - ffp2 - stiftung - masken - atemwiderstand | 29 | 477_warentest_ffp2_stiftung_masken |
| 478 | lichtgrüße - maya - lichtgrüsse - priesterin - engmaschiger | 52 | 478_lichtgrüße_maya_lichtgrüsse_priesterin |
| 479 | trump - putin - staatschef - biden - donald | 85 | 479_trump_putin_staatschef_biden |
| 480 | zoo - hannover - gesundheitsberufe - ärztekammer - impft | 75 | 480_zoo_hannover_gesundheitsberufe_ärztekammer |
| 481 | agamben - giorgio - nachwort - essayband - sodenkamp | 31 | 481_agamben_giorgio_nachwort_essayband |
| 482 | adnet - freedomday - grundrechte - kärnten - markel | 85 | 482_adnet_freedomday_grundrechte_kärnten |
| 483 | saharastaub - schwefeldioxid - staub - sahara - sand | 34 | 483_saharastaub_schwefeldioxid_staub_sahara |
| 484 | strafandrohungen - erforderlichkeit - ultima - durchsetzbar - ratio | 48 | 484_strafandrohungen_erforderlichkeit_ultima_durchsetzbar |
| 485 | impfpflicht - impfzwanges - impfung - belogen - gesundheitssprecher | 132 | 485_impfpflicht_impfzwanges_impfung_belogen |
| 486 | heizung - extra - petroleum - flammlöschautomatik - abschaltvorrichtung | 41 | 486_heizung_extra_petroleum_flammlöschautomatik |
| 487 | nährstoffe - bioverfügbarkeit - liposomale - doppelschicht - phospholipid | 43 | 487_nährstoffe_bioverfügbarkeit_liposomale_doppelschicht |
| 488 | marburg - äquatorialguinea - virus - südkorea - ausbruch | 57 | 488_marburg_äquatorialguinea_virus_südkorea |
| 489 | 850 - mockmill - fc - funkgerät - willhaben | 75 | 489_850_mockmill_fc_funkgerät |
| 490 | schritt - enteignen - exklusivem - geld - inflation | 152 | 490_schritt_enteignen_exklusivem_geld |
| 491 | ecoflow - tragbare - elektrowerkzeuge - patentierter - haushaltsgeräte | 29 | 491_ecoflow_tragbare_elektrowerkzeuge_patentierter |
| 492 | cdl - eier - haltbar - praxisbuch - chlordioxid | 75 | 492_cdl_eier_haltbar_praxisbuch |
| 493 | erbil - iran - raketen - afghanen - abgefeuert | 42 | 493_erbil_iran_raketen_afghanen |
| 494 | hebelverschluss - außenliegendem - spannring - schüttdichte - plombierbarem | 34 | 494_hebelverschluss_außenliegendem_spannring_schüttdichte |
| 495 | einlagerungsdatum - notieren - vorratstonne - eingelagerte - erntejahr | 28 | 495_einlagerungsdatum_notieren_vorratstonne_eingelagerte |
| 496 | milch - trocknungsprozess - wasserzugabe - abgepackt - dehydrierte | 35 | 496_milch_trocknungsprozess_wasserzugabe_abgepackt |
| 497 | kaliningrad - domizil - tagesaktuell - subjektiv - informationsagentur | 70 | 497_kaliningrad_domizil_tagesaktuell_subjektiv |
| 498 | plötz - tarp - bäcker - bonnell - app | 63 | 498_plötz_tarp_bäcker_bonnell |
| 499 | 00 - rathaus - 18 - marktplatz - eintopfofen | 59 | 499_00_rathaus_18_marktplatz |
| 500 | impfpflicht - ärzte - kliniken - exodus - impfung | 329 | 500_impfpflicht_ärzte_kliniken_exodus |
| 501 | schröder - soros - busek - 6713 - aspkat2lxxx | 157 | 501_schröder_soros_busek_6713 |
| 502 | windräder - ausbau - offshore - windkraftanlagen - 2030 | 56 | 502_windräder_ausbau_offshore_windkraftanlagen |
| 503 | wi2612 - wien - w2612 - anhalt - querfurt | 64 | 503_wi2612_wien_w2612_anhalt |
| 504 | 4200 - keramik - aussenbereiche - belüftete - piezozünder | 33 | 504_4200_keramik_aussenbereiche_belüftete |
| 505 | erde - asteroid - komet - persei - giuliana | 57 | 505_erde_asteroid_komet_persei |
| 506 | ngos - hochkorrupten - krankensystem - volksvermögen - gigantischen | 56 | 506_ngos_hochkorrupten_krankensystem_volksvermögen |
| 507 | beete - säuerlichen - aromatischen - gärventilschon - karotten | 62 | 507_beete_säuerlichen_aromatischen_gärventilschon |
| 508 | strafverfahren - freiheitsstrafe - günstigkeitsprinzip - athene - schilchegger | 35 | 508_strafverfahren_freiheitsstrafe_günstigkeitsprinzip_athene |
| 509 | du - dir - ich - dich - uns | 372 | 509_du_dir_ich_dich |
| 510 | putin - wladimir - ukraine - russland - putins | 542 | 510_putin_wladimir_ukraine_russland |
| 511 | kardinal - müller - überwachungsstaat - bischof - gleichzuschalten | 75 | 511_kardinal_müller_überwachungsstaat_bischof |
| 512 | wolff - reset - great - komplex - inklusiv | 43 | 512_wolff_reset_great_komplex |
| 513 | akku - unersetzlichen - weltempfang - weltempfänger - universalradio | 43 | 513_akku_unersetzlichen_weltempfang_weltempfänger |
| 514 | hervorzuheben - geräuschlose - jeglichen - innenräumen - profi | 28 | 514_hervorzuheben_geräuschlose_jeglichen_innenräumen |
| 515 | trocknen100 - nüssen - strohmeier - rezepte - dörren | 97 | 515_trocknen100_nüssen_strohmeier_rezepte |
| 516 | 3g - 2g - kontrollen - regel - nonfood | 49 | 516_3g_2g_kontrollen_regel |
| 517 | weihnachtsgeschäft - einzelhandel - 2g - innenstädte - hde | 37 | 517_weihnachtsgeschäft_einzelhandel_2g_innenstädte |
| 518 | laterne - piks - nieswandt - hill - trinity | 147 | 518_laterne_piks_nieswandt_hill |
| 519 | apartheid - südafrika - erbarmungsloser - schäbiger - unmenschlichem | 53 | 519_apartheid_südafrika_erbarmungsloser_schäbiger |
| 520 | geistesblitze - tangsworld - wir - bewusstsein - uns | 170 | 520_geistesblitze_tangsworld_wir_bewusstsein |
| 521 | chlorella - chlordioxid - süßwasseralge - alge - rabatt | 69 | 521_chlorella_chlordioxid_süßwasseralge_alge |
| 522 | abtreibungen - vagina - vulva - abtreibung - uterus | 44 | 522_abtreibungen_vagina_vulva_abtreibung |
| 523 | schwab - klaus - harvard - cia - wef | 40 | 523_schwab_klaus_harvard_cia |
| 524 | sturmfeuerzeug - legendäre - messinggehäuse - saucool - benzinfeuerzeug | 32 | 524_sturmfeuerzeug_legendäre_messinggehäuse_saucool |
| 525 | musk - elon - samsung - galaxy - netflix | 45 | 525_musk_elon_samsung_galaxy |
| 526 | biden - joe - hunter - trump - sohn | 77 | 526_biden_joe_hunter_trump |
| 527 | brille - odysseylbry - videokanäle - 1031 - 0104 | 39 | 527_brille_odysseylbry_videokanäle_1031 |
| 528 | widerstand - 0006 - 3506 - 1309 - at75 | 58 | 528_widerstand_0006_3506_1309 |
| 529 | magnesium - vollstahlaxt - calvez - bilbo - magnesiummangel | 80 | 529_magnesium_vollstahlaxt_calvez_bilbo |
| 530 | waldhäusl - wien - gottfried - asyl - landesrat | 46 | 530_waldhäusl_wien_gottfried_asyl |
| 531 | impfdosen - antikörper - untersuchten - reißner - engineer | 70 | 531_impfdosen_antikörper_untersuchten_reißner |
| 532 | transhumanismus - magnet - transhumanisten - stefan - auflage | 51 | 532_transhumanismus_magnet_transhumanisten_stefan |
| 533 | audioanalysen - ___________ - spreaker - audioanalyse - mslive | 59 | 533_audioanalysen_____________spreaker_audioanalyse |
| 534 | wissenschaftsforscher - scheingast - frauenkollektiv - rednerinnen - weish | 27 | 534_wissenschaftsforscher_scheingast_frauenkollektiv_rednerinnen |
| 535 | warburg - schmid - cum - wehrpflicht - kaack | 82 | 535_warburg_schmid_cum_wehrpflicht |
| 536 | ahrtal - flut - flutkatastrophe - krisenstab - katastrophe | 41 | 536_ahrtal_flut_flutkatastrophe_krisenstab |
| 537 | stahlkappe - außensohle - fersenbereich - dämpfung - schaftabschluss | 32 | 537_stahlkappe_außensohle_fersenbereich_dämpfung |
| 538 | poppel - patrick - grünen - grüne - poppels | 38 | 538_poppel_patrick_grünen_grüne |
| 539 | hauptbahnhof - graz - kundgebung - 13 - reformation | 75 | 539_hauptbahnhof_graz_kundgebung_13 |
| 540 | twitter - holter - anhörung - zensorin - congresswoman | 47 | 540_twitter_holter_anhörung_zensorin |
| 541 | wärme - heizung - stundenlang - sorgt - wohlige | 43 | 541_wärme_heizung_stundenlang_sorgt |
| 542 | platzbedarf - geringe - zubereitung - vorteil - lagerung | 27 | 542_platzbedarf_geringe_zubereitung_vorteil |
| 543 | 4970 - at82 - 1843 - 4500 - gibaatwwxxx | 37 | 543_4970_at82_1843_4500 |
| 544 | 2050 - info - direkt - ausgabe - durchblättern | 56 | 544_2050_info_direkt_ausgabe |
| 545 | impfpflicht - gesetz - allgemeine - beschäftigte - gesetzentwurf | 260 | 545_impfpflicht_gesetz_allgemeine_beschäftigte |
| 546 | flugobjekt - alaska - object - abgeschossen - papua | 105 | 546_flugobjekt_alaska_object_abgeschossen |
| 547 | ilverfahren - concoy - losungen - berufsverbote - mobbing | 26 | 547_ilverfahren_concoy_losungen_berufsverbote |
| 548 | dänemark - 83 - geboostert - 785 - omikron | 27 | 548_dänemark_83_geboostert_785 |
| 549 | lagern - 10er - notwasserbeutel - katastrophenfälle - krisenvorsorgeartikel | 26 | 549_lagern_10er_notwasserbeutel_katastrophenfälle |
| 550 | agrarminister - hunger - hungersnot - özdemir - getreide | 71 | 550_agrarminister_hunger_hungersnot_özdemir |
| 551 | blasendurchbruch - rausausderblase - medienallianz - unheilige - fgh | 49 | 551_blasendurchbruch_rausausderblase_medienallianz_unheilige |
| 552 | powerbank - silikonkappe - spritzwassergeschützte - charging - netzunabhängige | 37 | 552_powerbank_silikonkappe_spritzwassergeschützte_charging |
| 553 | belgrad - serbien - kosovo - serbiens - serbische | 62 | 553_belgrad_serbien_kosovo_serbiens |
| 554 | kilez - davos - ehrlich - captain - foryou | 187 | 554_kilez_davos_ehrlich_captain |
| 555 | bewährt - fluchtrucksacklars - gestattete - urlaubs - krisengebieten | 87 | 555_bewährt_fluchtrucksacklars_gestattete_urlaubs |
| 556 | moral - unmoral - miller - henry - erkünsteltes | 119 | 556_moral_unmoral_miller_henry |
| 557 | drosten - wiesendanger - bmi - bußgeld - erzählt | 77 | 557_drosten_wiesendanger_bmi_bußgeld |
| 558 | maskenpflicht - taxifahrer - maske - vermummungsverbot - tragen | 48 | 558_maskenpflicht_taxifahrer_maske_vermummungsverbot |
| 559 | optimismus - durchdrücken - schlimm - ausgebombt - gm | 51 | 559_optimismus_durchdrücken_schlimm_ausgebombt |
| 560 | putin - russland - westen - sanktionen - russischen | 207 | 560_putin_russland_westen_sanktionen |
| 561 | getreidetonne - wasserfilter - tool - unpraktisch - säcken | 64 | 561_getreidetonne_wasserfilter_tool_unpraktisch |
| 562 | nrg - piexon - jpx6 - protector - jet | 91 | 562_nrg_piexon_jpx6_protector |
| 563 | ungeimpfter - impfpflicht - diskriminierung - freiheitsgrundrechte - gutachter | 69 | 563_ungeimpfter_impfpflicht_diskriminierung_freiheitsgrundrechte |
| 564 | germany - my - from - greetings - patriots | 48 | 564_germany_my_from_greetings |
| 565 | friedman - iran - stratfor - pakistan - corporation | 92 | 565_friedman_iran_stratfor_pakistan |
| 566 | our - patriot - most - gourmet - sleepy | 26 | 566_our_patriot_most_gourmet |
| 567 | illusionen - ausfahrt - verfassungsrechtlichen - gemeinderat - ausnahmezustand | 52 | 567_illusionen_ausfahrt_verfassungsrechtlichen_gemeinderat |
| 568 | china - chinas - chinesische - cips - yuan | 147 | 568_china_chinas_chinesische_cips |
| 569 | geldautomaten - bank - banken - bankfilialen - kunden | 131 | 569_geldautomaten_bank_banken_bankfilialen |
| 570 | bgl - logistik - engelhardt - güterkraftverkehr - entsorgung | 70 | 570_bgl_logistik_engelhardt_güterkraftverkehr |
| 571 | covid - 19 - myokarditis - studie - impfstoffen | 95 | 571_covid_19_myokarditis_studie |
| 572 | kpa - akku - universalradio - weltempfänger - weltempfang | 61 | 572_kpa_akku_universalradio_weltempfänger |
| 573 | raketenofen - brennbaren - raketenöfen - outdoorküche - multitalent | 26 | 573_raketenofen_brennbaren_raketenöfen_outdoorküche |
| 574 | datei - stabildurchdenwandel - audio - download - euro | 46 | 574_datei_stabildurchdenwandel_audio_download |
| 575 | vdfr - guardiola - pep - versager - bitcoin | 43 | 575_vdfr_guardiola_pep_versager |
| 576 | balaton - deutschsprachigen - gorizia - csárda - gemeinschaft | 32 | 576_balaton_deutschsprachigen_gorizia_csárda |
| 577 | pizza - bestsellerreihe - jahresrückblicke - erfolgsautor - predictive | 93 | 577_pizza_bestsellerreihe_jahresrückblicke_erfolgsautor |
| 578 | gesundheitspersonal - lautstark - per - protestiert - ärztekammer | 38 | 578_gesundheitspersonal_lautstark_per_protestiert |
| 579 | kekulé - halle - universität - virologen - dienstenthebung | 60 | 579_kekulé_halle_universität_virologen |
| 580 | aktivisten - demonstrationen - gewalt - banner - waldhäusl | 82 | 580_aktivisten_demonstrationen_gewalt_banner |
| 581 | stiefel - bandage - hartmann - trauma - videoclip | 42 | 581_stiefel_bandage_hartmann_trauma |
| 582 | neutralität - bellen - van - österreich - schweiz | 64 | 582_neutralität_bellen_van_österreich |
| 583 | laptop - donbass - gefunden - verlassenen - militärbase | 39 | 583_laptop_donbass_gefunden_verlassenen |
| 584 | catherine - manuel - denk - denkbrief - gunnarkaiser | 68 | 584_catherine_manuel_denk_denkbrief |
| 585 | hauptadresse - bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6 - genodef1m03 - de34 - 9466 | 36 | 585_hauptadresse_bc1q7xfc7ppuw5jwz77sy29txy0efwqnpxw70swgy6_genodef1m03_de34 |
| 586 | funkgeräte - limitierung - verschlüsselung - abhörsicher - ausstatten | 26 | 586_funkgeräte_limitierung_verschlüsselung_abhörsicher |
| 587 | plan - kommst - lernst - sammlung - q74you | 42 | 587_plan_kommst_lernst_sammlung |
| 588 | ausführliches - urteilsverkündung - dr - zhvanetskaya - pagitz | 81 | 588_ausführliches_urteilsverkündung_dr_zhvanetskaya |
| 589 | 850 - funkgerät - fc - stabo - pmr | 36 | 589_850_funkgerät_fc_stabo |
| 590 | russen - ukraine - russland - russischen - krieg | 534 | 590_russen_ukraine_russland_russischen |
| 591 | ausnahmezustand - lauterbach - karl - palmer - normalität | 66 | 591_ausnahmezustand_lauterbach_karl_palmer |
| 592 | allesaufdentisch - byoblu - dankefueralles - allesdichtmachen - bedanken | 80 | 592_allesaufdentisch_byoblu_dankefueralles_allesdichtmachen |
| 593 | kontaktbeschränkungen - geimpfte - zusammenkünfte - genesene - tschentscher | 36 | 593_kontaktbeschränkungen_geimpfte_zusammenkünfte_genesene |
| 594 | lampenöl - autark - petroleumlampen - reinheit - brennstoff | 25 | 594_lampenöl_autark_petroleumlampen_reinheit |
| 595 | qfs - ripple - cftc - sec - windows | 67 | 595_qfs_ripple_cftc_sec |
| 596 | schlafsack - schlafsackinneren - thermolite - geringem - isolierung | 25 | 596_schlafsack_schlafsackinneren_thermolite_geringem |
| 597 | 11kg - mehrwöchiger - umrüstgasschlauch - propangasflaschen - widerstandsfähigen | 25 | 597_11kg_mehrwöchiger_umrüstgasschlauch_propangasflaschen |
| 598 | verfassungswidrig - 81 - wissenschaftler - kontrollgruppe - wissenschaftlern | 72 | 598_verfassungswidrig_81_wissenschaftler_kontrollgruppe |
| 599 | set - micro - weitaus - sturmlaterne - flamme | 76 | 599_set_micro_weitaus_sturmlaterne |
| 600 | lichte - werk - menschheitsfragen - überlieferungen - gegebenheit | 31 | 600_lichte_werk_menschheitsfragen_überlieferungen |
| 601 | kennedy - fauci - carlson - rosenberg - jr | 42 | 601_kennedy_fauci_carlson_rosenberg |
| 602 | kubicki - rausaufdiestrasse - infektionsschutzgesetz - vergeltung - änderung | 69 | 602_kubicki_rausaufdiestrasse_infektionsschutzgesetz_vergeltung |
| 603 | rpp - innere - präsentiert - barucker - bastian | 57 | 603_rpp_innere_präsentiert_barucker |
| 604 | zdf - anne - bellut - nuoviso - lehrstunde | 44 | 604_zdf_anne_bellut_nuoviso |
| 605 | vitamin - selbstverteidigungsschirm - selen - melatonin - thule | 118 | 605_vitamin_selbstverteidigungsschirm_selen_melatonin |
| 606 | fpö - kickl - neuwahlen - nehammer - partei | 216 | 606_fpö_kickl_neuwahlen_nehammer |
| 607 | rundbriefabo - körpergewicht - athletischen - bezogenes - hintergrundwissen | 90 | 607_rundbriefabo_körpergewicht_athletischen_bezogenes |
| 608 | vpn - seil - restricted - republic - videoeinen | 44 | 608_vpn_seil_restricted_republic |
| 609 | kostenlawine - schreyer - stoppen - klimaschutz - wirtschaftskrise | 209 | 609_kostenlawine_schreyer_stoppen_klimaschutz |
| 610 | faszinierendsten - immunsystem - verteidigungslinie - ram - musik | 56 | 610_faszinierendsten_immunsystem_verteidigungslinie_ram |
| 611 | ibrahim - abgeschoben - brokstedt - regionalzug - palästinenser | 43 | 611_ibrahim_abgeschoben_brokstedt_regionalzug |
| 612 | rentner - rente - klimabonus - schlegel - 2030 | 86 | 612_rentner_rente_klimabonus_schlegel |
| 613 | sanktionen - lossagen - kommmt - unwälzung - ruppig | 117 | 613_sanktionen_lossagen_kommmt_unwälzung |
| 614 | mcdonald - wodka - filialen - produkte - böser | 56 | 614_mcdonald_wodka_filialen_produkte |
| 615 | leitungswasser - trinkwasserqualität - maximale - wasserbar - gifte | 42 | 615_leitungswasser_trinkwasserqualität_maximale_wasserbar |
| 616 | omikron - euro - variante - coronavirus - verdienen | 163 | 616_omikron_euro_variante_coronavirus |
| 617 | hierfür - petroleumheizung - folgende - vorteile - alternative | 29 | 617_hierfür_petroleumheizung_folgende_vorteile |
| 618 | impfpflicht - fdp - kubicki - mückstein - wolfgang | 99 | 618_impfpflicht_fdp_kubicki_mückstein |
| 619 | medizinrecht - fachanwältin - bahner - beate - buches | 87 | 619_medizinrecht_fachanwältin_bahner_beate |
| 620 | schweizer - wolldecke - armee - privatgebrauch - qualitätsbewusstsein | 32 | 620_schweizer_wolldecke_armee_privatgebrauch |
| 621 | webkante - farbstreifen - originalvorgaben - schafwolle - fertigt | 24 | 621_webkante_farbstreifen_originalvorgaben_schafwolle |
| 622 | jaspers - ulrike - marcel - grenze - felix | 71 | 622_jaspers_ulrike_marcel_grenze |
| 623 | gor - odessa - amulette - rassadin - altera | 107 | 623_gor_odessa_amulette_rassadin |
| 624 | lauterbach - karl - corona - maßnahmen - fdp | 139 | 624_lauterbach_karl_corona_maßnahmen |
| 625 | stauraum - zeltvordach - schlafbereich - gepäck - laufruhiger | 40 | 625_stauraum_zeltvordach_schlafbereich_gepäck |
| 626 | umerziehung - volkes - höchstwahrscheinlich - freimaurer - zulassungssystem | 30 | 626_umerziehung_volkes_höchstwahrscheinlich_freimaurer |
| 627 | myokarditis - krankenhaus - australischer - notaufnahme - herzoperation | 66 | 627_myokarditis_krankenhaus_australischer_notaufnahme |
| 628 | diskret - lebensmittelvorrat - lebensmittel - trinkgeld - esse | 61 | 628_diskret_lebensmittelvorrat_lebensmittel_trinkgeld |
| 629 | mitgefühl - verlag - herausgegebene - essayband - nachwort | 109 | 629_mitgefühl_verlag_herausgegebene_essayband |
| 630 | your - visit - family - kerzen - works | 39 | 630_your_visit_family_kerzen |
| 631 | crowdbunker - wasserfilter - guardian - twitch - pumpenhub | 54 | 631_crowdbunker_wasserfilter_guardian_twitch |
| 632 | brauche - liebe - du - abendgebet - transzendenz | 155 | 632_brauche_liebe_du_abendgebet |
| 633 | heimatkurier - rundbrief - zensursicheren - förderer - telegramelite | 79 | 633_heimatkurier_rundbrief_zensursicheren_förderer |
| 634 | respekt - 4970 - at82 - 1843 - 4500 | 69 | 634_respekt_4970_at82_1843 |
| 635 | lichterspaziergang - autokorso - gleisdorf - natascha - merith | 83 | 635_lichterspaziergang_autokorso_gleisdorf_natascha |
| 636 | löwenmamas - schulprojekt - loewenmamaspapas - versenden - bildungs | 35 | 636_löwenmamas_schulprojekt_loewenmamaspapas_versenden |
| 637 | ovalmedia - spendet - verteiler - traudichzufragen - apolut | 45 | 637_ovalmedia_spendet_verteiler_traudichzufragen |
| 638 | grüße - uttley - liebe - smokie - terry | 131 | 638_grüße_uttley_liebe_smokie |
| 639 | satire - ironie - film - bedienungsanleitung - natörrlich | 53 | 639_satire_ironie_film_bedienungsanleitung |
| 640 | russen - russischen - schule - russisch - lehrplan | 92 | 640_russen_russischen_schule_russisch |
| 641 | euro - vorrichtung - alkoholtester - kellinghaus - akademiker | 92 | 641_euro_vorrichtung_alkoholtester_kellinghaus |
| 642 | hyundai - stromgenerator - inverter - fortschrittlichen - mobil | 24 | 642_hyundai_stromgenerator_inverter_fortschrittlichen |
| 643 | jaco - saatgut - anderwertig - zerstückelt - schriftlicher | 76 | 643_jaco_saatgut_anderwertig_zerstückelt |
| 644 | polen - kriegsbetroffene - hilfsgütern - wirtschaftskriege - warschau | 73 | 644_polen_kriegsbetroffene_hilfsgütern_wirtschaftskriege |
| 645 | kochen - windigstem - nassem - rekordzeit - kanne | 24 | 645_kochen_windigstem_nassem_rekordzeit |
| 646 | tönnies - fleischbranche - versorgungssicherheit - fleisch - fleischproduzent | 40 | 646_tönnies_fleischbranche_versorgungssicherheit_fleisch |
| 647 | selenskyj - berlusconi - vigano - fußballerische - erzbischof | 75 | 647_selenskyj_berlusconi_vigano_fußballerische |
| 648 | verbinde - punkte - sunny - neugier - jugendlicher | 39 | 648_verbinde_punkte_sunny_neugier |
| 649 | beneder - vfgh - verwaltungsstrafverfahren - mag - subjektqualität | 67 | 649_beneder_vfgh_verwaltungsstrafverfahren_mag |
| 650 | defcon - pope - stufe - steps - death | 28 | 650_defcon_pope_stufe_steps |
| 651 | zukunftskonferenz - armut - zuwanderung - intando - genpool | 84 | 651_zukunftskonferenz_armut_zuwanderung_intando |
| 652 | unvermeidlich - welterklärung - schöpfungsganzen - lebenshilfen - aufbauend | 49 | 652_unvermeidlich_welterklärung_schöpfungsganzen_lebenshilfen |
| 653 | hartkekse - trekkingbereich - epas - wassergehalt - tagesration | 23 | 653_hartkekse_trekkingbereich_epas_wassergehalt |
| 654 | geigerzähler - radioaktiver - counter - strahlung - cm | 29 | 654_geigerzähler_radioaktiver_counter_strahlung |
| 655 | heizgerätes - eingebaute - strahlungswärme - sauerstoffmangelsicherung - zündsicherung | 23 | 655_heizgerätes_eingebaute_strahlungswärme_sauerstoffmangelsicherung |
| 656 | perestroika - globalisierung - westen - klassenkampf - ect | 117 | 656_perestroika_globalisierung_westen_klassenkampf |
| 657 | 6713 - aspkat2lxxx - 0321 - 0058 - at50 | 27 | 657_6713_aspkat2lxxx_0321_0058 |
| 658 | eu - wikinger - georgien - irakli - österreich | 131 | 658_eu_wikinger_georgien_irakli |
| 659 | song - miriam - hope - banane - björn | 113 | 659_song_miriam_hope_banane |
| 660 | cnn - producer - daughter - source - fantasizing | 50 | 660_cnn_producer_daughter_source |
| 661 | generation - klimaterroristen - klima - culpa - bodenkultur | 64 | 661_generation_klimaterroristen_klima_culpa |
| 662 | greene - gadde - mace - republikanerin - marjorie | 24 | 662_greene_gadde_mace_republikanerin |
| 663 | dresden - frankfurt - sachsen - gedenkspaziergang - taunusanlage | 105 | 663_dresden_frankfurt_sachsen_gedenkspaziergang |
| 664 | radikalisierung - bülow - pürstl - reul - fidji | 132 | 664_radikalisierung_bülow_pürstl_reul |
| 665 | hifi - mikrowelle - kochherd - spannungs - frequenzschwankungen | 28 | 665_hifi_mikrowelle_kochherd_spannungs |
| 666 | wiedergewinnung - mündigkeit - viralität - seminare - scenes | 29 | 666_wiedergewinnung_mündigkeit_viralität_seminare |
| 667 | mittelerde - säulen - mittelerdetv - lädchen - tv | 36 | 667_mittelerde_säulen_mittelerdetv_lädchen |
| 668 | geburtshilfe - krankenhaus - nrw - klinik - gynäkologie | 117 | 668_geburtshilfe_krankenhaus_nrw_klinik |
| 669 | adventkalender - kerzen - verdauungsspaziergang - eisbaden - weihnachtsmarkt | 84 | 669_adventkalender_kerzen_verdauungsspaziergang_eisbaden |
| 670 | fettleber - kg - lebensmittel - kaffee - gewogen | 24 | 670_fettleber_kg_lebensmittel_kaffee |
| 671 | passierscheine - ausgangssperre - pflegepersonals - zdfheute - unangemeldete | 34 | 671_passierscheine_ausgangssperre_pflegepersonals_zdfheute |
| 672 | valentine - erfindungen - space - kiyosaki - qr | 72 | 672_valentine_erfindungen_space_kiyosaki |
| 673 | instrumenten - stew - content - op - activism | 53 | 673_instrumenten_stew_content_op |
| 674 | schöning - heiko - verbrechen - seilschaften - enthüllungsbuch | 27 | 674_schöning_heiko_verbrechen_seilschaften |
| 675 | ryanair - bmi - faa - aya - velázquez | 66 | 675_ryanair_bmi_faa_aya |
| 676 | ganser - intendantin - schlesinger - rbb - daniele | 150 | 676_ganser_intendantin_schlesinger_rbb |
| 677 | cibis - landgrebe - jonas - tögel - jobst | 34 | 677_cibis_landgrebe_jonas_tögel |
| 678 | apple - app - verschenkt - iphone - distanzeritis | 58 | 678_apple_app_verschenkt_iphone |
| 679 | hotels - rumänien - nonnemacher - brandenburg - ampeln | 35 | 679_hotels_rumänien_nonnemacher_brandenburg |
| 680 | eier - vogelgrippe - hühner - eierfarm - hühnerfarmen | 28 | 680_eier_vogelgrippe_hühner_eierfarm |
| 681 | rumble - mich - muttergesellschaft - impfstreik - bestellen | 41 | 681_rumble_mich_muttergesellschaft_impfstreik |
| 682 | bismarck - politiker - kogler - werner - schönhausen | 50 | 682_bismarck_politiker_kogler_werner |
| 683 | minderheit - rechtsextremismus - bewegung - gesellschaft - neutralität | 221 | 683_minderheit_rechtsextremismus_bewegung_gesellschaft |
| 684 | stewpeters10 - your - destress - kryptonite - purchasing | 28 | 684_stewpeters10_your_destress_kryptonite |
| 685 | adams - yasmine - motarjemi - nestlé - jerome | 66 | 685_adams_yasmine_motarjemi_nestlé |
| 686 | söder - markus - kubicki - ministerpräsident - bayerns | 24 | 686_söder_markus_kubicki_ministerpräsident |
| 687 | angesagteste - schickeria - de97100110012620193011 - paypal - betteln | 35 | 687_angesagteste_schickeria_de97100110012620193011_paypal |
| 688 | balloon - abschüsse - ufo - ufos - kirby | 42 | 688_balloon_abschüsse_ufo_ufos |
| 689 | lawrow - ukraine - russland - krim - putin | 266 | 689_lawrow_ukraine_russland_krim |
| 690 | haushalt - einfachste - person - monat - set | 23 | 690_haushalt_einfachste_person_monat |
| 691 | kireev - ukrainischen - soldaten - denis - getötet | 179 | 691_kireev_ukrainischen_soldaten_denis |
| 692 | ordnungsamt - restaurantleiter - restaurant - 2g - mitarbeitenden | 24 | 692_ordnungsamt_restaurantleiter_restaurant_2g |
| 693 | immunsystem - impfungen - impfstoff - stärken - impfung | 163 | 693_immunsystem_impfungen_impfstoff_stärken |
| 694 | energetischen - alexanders - tagesenergie - zeitqualität - hinweise | 23 | 694_energetischen_alexanders_tagesenergie_zeitqualität |
| 695 | drucksimulation - gewichtsdecke - hervorgerufene - schlafqualität - umarmung | 27 | 695_drucksimulation_gewichtsdecke_hervorgerufene_schlafqualität |
| 696 | obama - cia - oun - friedensnobelpreis - kohn | 84 | 696_obama_cia_oun_friedensnobelpreis |
| 697 | einreiben - speiseöl - öl - flaschen - lidl | 25 | 697_einreiben_speiseöl_öl_flaschen |
| 698 | würrer - bildquelle - thumbnail - telegram - facebook | 47 | 698_würrer_bildquelle_thumbnail_telegram |
| 699 | klarnamenpflicht - schneekanonen - aktivisten - kretschmer - hasskriminalität | 87 | 699_klarnamenpflicht_schneekanonen_aktivisten_kretschmer |
| 700 | he - schrank - liefen - britney - monteur | 326 | 700_he_schrank_liefen_britney |
| 701 | vollmilchpulver - basics - bio - ef - grundnahrungsmitteln | 27 | 701_vollmilchpulver_basics_bio_ef |
| 702 | övp - sobotka - korruptionsuntersuchungsausschuss - befragungstag - ausgezeichnet | 72 | 702_övp_sobotka_korruptionsuntersuchungsausschuss_befragungstag |
| 703 | co2 - privatjets - gates - handwerker - eigenhändig | 50 | 703_co2_privatjets_gates_handwerker |
| 704 | watford - wandel - böttcher - 1012 - 0091 | 68 | 704_watford_wandel_böttcher_1012 |
| 705 | frauen - asylheim - einsiedel - peutenhausen - untergebracht | 125 | 705_frauen_asylheim_einsiedel_peutenhausen |
| 706 | hildegard - bingen - ephesus - abendgebete - jonathan | 24 | 706_hildegard_bingen_ephesus_abendgebete |
| 707 | webinar - servustv - servus - stattfindet - suche | 76 | 707_webinar_servustv_servus_stattfindet |
| 708 | gedichte - adventskalender - weihnachtszeit - weihnacht - geschichten | 41 | 708_gedichte_adventskalender_weihnachtszeit_weihnacht |
| 709 | gewalt - ruppert - beleidigender - aufkommens - unsachlicher | 43 | 709_gewalt_ruppert_beleidigender_aufkommens |
| 710 | salzburg - protestmarsch - mozartplatz - 12 - haslauer | 64 | 710_salzburg_protestmarsch_mozartplatz_12 |
| 711 | schweden - sahin - ugur - positivrate - glaubhafte | 59 | 711_schweden_sahin_ugur_positivrate |
| 712 | schuberts - lagemeldung - abonniert - körper - reinigungsprogramm | 26 | 712_schuberts_lagemeldung_abonniert_körper |
| 713 | cyberangriff - meister - websites - israel - israelische | 54 | 713_cyberangriff_meister_websites_israel |
| 714 | blackrock - benett - vermögensverwalter - glänzt - honecker | 31 | 714_blackrock_benett_vermögensverwalter_glänzt |
| 715 | absatz - mydatamychoice - 22a - ärztliches - genesenennachweis | 26 | 715_absatz_mydatamychoice_22a_ärztliches |
| 716 | ukrainische - nationalisten - roedik - kira - nazis | 60 | 716_ukrainische_nationalisten_roedik_kira |
| 717 | soros - bolsonaro - xi - george - zivilisation | 57 | 717_soros_bolsonaro_xi_george |
| 718 | florida - desantis - ron - theater - gouverneur | 66 | 718_florida_desantis_ron_theater |
| 719 | aperio - sono - zusammenschnitt - videosicherung - eklig | 60 | 719_aperio_sono_zusammenschnitt_videosicherung |
| 720 | eintopfofen - praktisch - holzgriff - grillrost - gulaschkanone | 46 | 720_eintopfofen_praktisch_holzgriff_grillrost |
| 721 | aufklärungsvideos - hilfreiche - jeglicher - zensur - frei | 22 | 721_aufklärungsvideos_hilfreiche_jeglicher_zensur |
| 722 | borrell - josep - zensur - außenbeauftragte - eu | 38 | 722_borrell_josep_zensur_außenbeauftragte |
| 723 | wintergrillen - anziehungspunkt - hingucker - gartenparty - tannen | 22 | 723_wintergrillen_anziehungspunkt_hingucker_gartenparty |
| 724 | löwenzahn - biologischem - broendegaarden - löwenzahnfelder - extrakts | 30 | 724_löwenzahn_biologischem_broendegaarden_löwenzahnfelder |
| 725 | taiwan - china - taiwans - chinesen - chinesischen | 69 | 725_taiwan_china_taiwans_chinesen |
| 726 | blackout - stromausfall - strom - blackouts - südafrika | 141 | 726_blackout_stromausfall_strom_blackouts |
| 727 | rt - türkei - lira - verwaltungsstrafe - sanktionsmaßnahmen | 93 | 727_rt_türkei_lira_verwaltungsstrafe |
| 728 | zukunftsfähigkeit - glauben - lähmt - wirtschaftsnationen - umfrage | 69 | 728_zukunftsfähigkeit_glauben_lähmt_wirtschaftsnationen |
| 729 | bevorratung - expeditionsbereich - speziellen - katastrophenschutz - bewährt | 22 | 729_bevorratung_expeditionsbereich_speziellen_katastrophenschutz |
| 730 | matthie - größere - erhitzen - personengruppen - angesagteste | 50 | 730_matthie_größere_erhitzen_personengruppen |
| 731 | sicherheitsgefühl - selbstverteidigungsschirm - abwehrstock - multifunktionsschirm - unverdächtigen | 34 | 731_sicherheitsgefühl_selbstverteidigungsschirm_abwehrstock_multifunktionsschirm |
| 732 | katastrophenforscher - türkis - goersch - akzep - schäuble | 59 | 732_katastrophenforscher_türkis_goersch_akzep |
| 733 | verbrechen - entschuldigung - folter - verbrecher - samurai | 117 | 733_verbrechen_entschuldigung_folter_verbrecher |
| 734 | gasheizofen - belüfteten - wettergeschützten - innenbereich - außenbereich | 22 | 734_gasheizofen_belüfteten_wettergeschützten_innenbereich |
| 735 | gcr - schwab - nikola - klaus - wiedereröffnung | 75 | 735_gcr_schwab_nikola_klaus |
| 736 | wasserfilter - guardian - modernste - purifier - preisgekrönte | 22 | 736_wasserfilter_guardian_modernste_purifier |
| 737 | lieferumfang - enthalten - platzsparenden - gasflaschenaufstellraum - gasschlauch | 44 | 737_lieferumfang_enthalten_platzsparenden_gasflaschenaufstellraum |
| 738 | kaniber - bauern - agrarministerin - hilf - michaela | 22 | 738_kaniber_bauern_agrarministerin_hilf |
| 739 | tesla - ladesäulen - lithium - cent - erhöht | 25 | 739_tesla_ladesäulen_lithium_cent |
| 740 | taiwan - china - südchinesischen - philippinen - vergesst | 38 | 740_taiwan_china_südchinesischen_philippinen |
| 741 | regierung - regime - politikern - politiker - verantwortlichen | 257 | 741_regierung_regime_politikern_politiker |
| 742 | diplomaten - ausweisung - schamalow - personae - roslesinforg | 73 | 742_diplomaten_ausweisung_schamalow_personae |
| 743 | freiheit - arendt - hannah - zwängen - abwesenheit | 41 | 743_freiheit_arendt_hannah_zwängen |
| 744 | szekeres - ärztekammer - mediziner - ärztekammerpräsident - ärzten | 52 | 744_szekeres_ärztekammer_mediziner_ärztekammerpräsident |
| 745 | frisierter - veruntreut - umverteilt - manipulierter - epochaler | 87 | 745_frisierter_veruntreut_umverteilt_manipulierter |
| 746 | magdeburg - karneval - keszthely - deutschsprachige - umzug | 25 | 746_magdeburg_karneval_keszthely_deutschsprachige |
| 747 | sönnichsen - freigesprochen - taiwan - prof - andreas | 52 | 747_sönnichsen_freigesprochen_taiwan_prof |
| 748 | straßen - raus - straße - kritischemasse - kanntet | 65 | 748_straßen_raus_straße_kritischemasse |
| 749 | wolff - ernst - groissböck - kongress - gespräch | 103 | 749_wolff_ernst_groissböck_kongress |
| 750 | demokratie - kanzlerstuhl - kretschmar - sozialismus - pankalla | 50 | 750_demokratie_kanzlerstuhl_kretschmar_sozialismus |
| 751 | sonnenblumenöl - speiseöl - metro - aldi - sonnenblumen | 35 | 751_sonnenblumenöl_speiseöl_metro_aldi |
| 752 | stinkstoffen - abwehrspray - herkömmlichen - angel - pfeffersprays | 21 | 752_stinkstoffen_abwehrspray_herkömmlichen_angel |
| 753 | q74you - dein - anfänger - anrollen - schnelleinstieg | 58 | 753_q74you_dein_anfänger_anrollen |
| 754 | salzburg - pucher - aktivisten - landtagswahl - klimaaktivisten | 72 | 754_salzburg_pucher_aktivisten_landtagswahl |
| 755 | nattokinase - wundermittel - unserige - interviewgäste - geposteten | 38 | 755_nattokinase_wundermittel_unserige_interviewgäste |
| 756 | bergholz - variante - fauci - pandemie - ghana | 197 | 756_bergholz_variante_fauci_pandemie |
| 757 | mel - and - irlmaier - truth - musician | 91 | 757_mel_and_irlmaier_truth |
| 758 | migranten - asylwerber - klimabonus - migrationswaffe - ausländer | 134 | 758_migranten_asylwerber_klimabonus_migrationswaffe |
| 759 | étterem - bearth - anmeldung - balaton - kongress | 56 | 759_étterem_bearth_anmeldung_balaton |
| 760 | fitz - lisa - winnetou - fake - swr | 98 | 760_fitz_lisa_winnetou_fake |
| 761 | ag - preise - discounter - teurer - supermarkt | 64 | 761_ag_preise_discounter_teurer |
| 762 | laune - schlafsprachnachricht - leiberl - mäckle - zeug | 45 | 762_laune_schlafsprachnachricht_leiberl_mäckle |
| 763 | germany - stew - devastated - content - latest | 49 | 763_germany_stew_devastated_content |
| 764 | florian - karlsruhe - justizanstalt - harbarth - zkm | 21 | 764_florian_karlsruhe_justizanstalt_harbarth |
| 765 | mfg - stellungnahmemöglichkeit - opt - abmeldung - vollstreckungs | 36 | 765_mfg_stellungnahmemöglichkeit_opt_abmeldung |
| 766 | österreich - impfpflicht - covidmassnahmen - strafen - epidemische | 135 | 766_österreich_impfpflicht_covidmassnahmen_strafen |
| 767 | österreich - felssturz - abriegelung - bergretter - ungemütliches | 43 | 767_österreich_felssturz_abriegelung_bergretter |
| 768 | sears - jp - brotbackautomat - brot - komiker | 66 | 768_sears_jp_brotbackautomat_brot |
| 769 | day - freedom - käse - abgesagt - tischen | 65 | 769_day_freedom_käse_abgesagt |
| 770 | gauck - lebensglück - lebensfreude - frieren - bundespräsident | 32 | 770_gauck_lebensglück_lebensfreude_frieren |
| 771 | pasta - mitgründer - schriftlich - umfangreiche - analyst | 29 | 771_pasta_mitgründer_schriftlich_umfangreiche |
| 772 | proxy - 0550 - sparkassede88 - 6010 - twitterusa | 35 | 772_proxy_0550_sparkassede88_6010 |
| 773 | ki - chatgpt - altman - halbstaatlichen - zensurmaschine | 76 | 773_ki_chatgpt_altman_halbstaatlichen |
| 774 | abend - nacht - denkt - morgen - grüße | 122 | 774_abend_nacht_denkt_morgen |
| 775 | copyright - use - fair - otherwise - materials | 21 | 775_copyright_use_fair_otherwise |
| 776 | ecoflow - batteriegespeister - leiseste - helfer - stromgenerator | 45 | 776_ecoflow_batteriegespeister_leiseste_helfer |
| 777 | spazierten - veranstalteten - bundeskazleramt - deponieren - andauernde | 47 | 777_spazierten_veranstalteten_bundeskazleramt_deponieren |
| 778 | dynamo - kurbel - powerstation - stromgenerierung - ausflüge | 42 | 778_dynamo_kurbel_powerstation_stromgenerierung |
| 779 | elektrosmog - gaskartuschen - umgebung - sinus - tester | 51 | 779_elektrosmog_gaskartuschen_umgebung_sinus |
| 780 | körperzellen - lebensgefahr - hauptbestandteile - comirnaty - bahner | 54 | 780_körperzellen_lebensgefahr_hauptbestandteile_comirnaty |
| 781 | greetings - patriots - personal - go - my | 24 | 781_greetings_patriots_personal_go |
| 782 | nato - ukraine - krim - kiew - bereit | 183 | 782_nato_ukraine_krim_kiew |
| 783 | todesursachen - gehirn - zurückblickt - wahnsinniger - herz | 95 | 783_todesursachen_gehirn_zurückblickt_wahnsinniger |
| 784 | zeitzeugen - band - corona_fakten - ansprechpartner - alternativmedien | 60 | 784_zeitzeugen_band_corona_fakten_ansprechpartner |
| 785 | nürnberg - straße - montagsspaziergang - frieden - marschieren | 85 | 785_nürnberg_straße_montagsspaziergang_frieden |
| 786 | day - 11 - feb - october - clock | 123 | 786_day_11_feb_october |
| 787 | schmölen - stillen - stichwaffen - moschee - protest | 128 | 787_schmölen_stillen_stichwaffen_moschee |
| 788 | wahrheit - lüge - lügen - blöd - oten | 115 | 788_wahrheit_lüge_lügen_blöd |
| 789 | pcr - tests - getestet - test - hebamme | 35 | 789_pcr_tests_getestet_test |
| 790 | löhnitz - polizei - lka - weishaupt - steffen | 130 | 790_löhnitz_polizei_lka_weishaupt |
| 791 | diktatur - savior - kriegshetzer - staat - pflichtet | 168 | 791_diktatur_savior_kriegshetzer_staat |
| 792 | impfnebenwirkungen - apothekenmitarbeiterin - konferenz - lausen - apothekerkammer | 81 | 792_impfnebenwirkungen_apothekenmitarbeiterin_konferenz_lausen |
| 793 | 12 - versammlungsleiter - wien - 2021 - demo | 96 | 793_12_versammlungsleiter_wien_2021 |
| 794 | mfg - bundesvorstand - pressekonferenz - oö - akzente | 21 | 794_mfg_bundesvorstand_pressekonferenz_oö |
| 795 | tesla - wse - grünheide - kubikmeter - gigafactory | 38 | 795_tesla_wse_grünheide_kubikmeter |
| 796 | friedliches - zeichen - wien - samstag - lautes | 87 | 796_friedliches_zeichen_wien_samstag |
| 797 | pen - zemmour - macron - frankreich - anleitung | 27 | 797_pen_zemmour_macron_frankreich |
| 798 | bruck - mur - hymne - lebe - 21 | 110 | 798_bruck_mur_hymne_lebe |
| 799 | siga - fairdenker - fairdenken - wien - rebell | 66 | 799_siga_fairdenker_fairdenken_wien |
| 800 | solltest - nebenwirkung - streams - interessieren - 711 | 38 | 800_solltest_nebenwirkung_streams_interessieren |
| 801 | außerparlamentarischer - videos - yt - untersuchungsausschuss - vernetzt | 39 | 801_außerparlamentarischer_videos_yt_untersuchungsausschuss |
| 802 | empfehle - zensurwelle - bundespressekonferenz - zensurfreien - voraus | 88 | 802_empfehle_zensurwelle_bundespressekonferenz_zensurfreien |
| 803 | kastenform - poncho - petromax - praktisch - liner | 47 | 803_kastenform_poncho_petromax_praktisch |
| 804 | fidor - 4502 - spendenmöglichkeit - de95 - 0020 | 20 | 804_fidor_4502_spendenmöglichkeit_de95 |
| 805 | jemen - josilo - saudi - marko - wars | 46 | 805_jemen_josilo_saudi_marko |
| 806 | neuinfektionen - mutmaßt - todesfälle - 24 - übersterblichkeit | 29 | 806_neuinfektionen_mutmaßt_todesfälle_24 |
| 807 | scholz - olaf - linien - bundeskanzler - rote | 66 | 807_scholz_olaf_linien_bundeskanzler |
| 808 | jva - kunst - fuellmich - live - philine | 62 | 808_jva_kunst_fuellmich_live |
| 809 | gendern - rbb - gender - gendersprache - wdr | 51 | 809_gendern_rbb_gender_gendersprache |
| 810 | odessa - atomkraftwerk - nuklearanlage - ukrenerho - saporozhija | 66 | 810_odessa_atomkraftwerk_nuklearanlage_ukrenerho |
| 811 | journalist - gez - ballweg - bizarres - vorbeikommt | 34 | 811_journalist_gez_ballweg_bizarres |
| 812 | sumy - korridore - mariupol - feuerpause - waffenruhe | 40 | 812_sumy_korridore_mariupol_feuerpause |
| 813 | nessmuk - woodcraft - gehackt - outdoor - grillen | 42 | 813_nessmuk_woodcraft_gehackt_outdoor |
| 814 | stellenangebote - geschenke - normal - rottenburg - euren | 87 | 814_stellenangebote_geschenke_normal_rottenburg |
| 815 | herman - liebe - popp - zuschrift - leserin | 223 | 815_herman_liebe_popp_zuschrift |
| 816 | gettr - tagesreport - fragenrunde - mslive - unterstützern | 34 | 816_gettr_tagesreport_fragenrunde_mslive |
| 817 | medizin - anderson - krenn - hausärztliche - dufayet | 41 | 817_medizin_anderson_krenn_hausärztliche |
| 818 | gegenuni - aktualisierter - sommersemester - terheş - kalender | 71 | 818_gegenuni_aktualisierter_sommersemester_terheş |
| 819 | schwarz - grüne - megakrise - zudecken - tanken | 28 | 819_schwarz_grüne_megakrise_zudecken |
| 820 | videos - lesen - compact - bader - box | 33 | 820_videos_lesen_compact_bader |
| 821 | steyr - zehntausende - großdemonstrationen - adventsonntag - ried | 52 | 821_steyr_zehntausende_großdemonstrationen_adventsonntag |
| 822 | demokratieforscherin - corona - madagaskar - maßnahmen - entschuldigungen | 217 | 822_demokratieforscherin_corona_madagaskar_maßnahmen |
| 823 | chelsea - abramowitsch - roman - eingefroren - spielertransfers | 29 | 823_chelsea_abramowitsch_roman_eingefroren |
| 824 | pelargoni - pilogrip - tabs - bingen - hildegard | 84 | 824_pelargoni_pilogrip_tabs_bingen |
| 825 | schwarzer - wagenknecht - sahra - alice - manifest | 24 | 825_schwarzer_wagenknecht_sahra_alice |
| 826 | moral - fügsamen - bon - rebellieren - gemeinschaften | 129 | 826_moral_fügsamen_bon_rebellieren |
| 827 | bye - kalcker - biophysiker - behandlung - handhabende | 82 | 827_bye_kalcker_biophysiker_behandlung |
| 828 | geschichten - dokumentarfilm - leidvollen - vakzin - dateigrösse | 36 | 828_geschichten_dokumentarfilm_leidvollen_vakzin |
| 829 | heldenplatz - wien - 02 - megademo - 23 | 68 | 829_heldenplatz_wien_02_megademo |
| 830 | raumluft - platforms - atemwiderstand - expires - formaldehyd | 28 | 830_raumluft_platforms_atemwiderstand_expires |
| 831 | zdf - ard - 1958 - sender - 805 | 53 | 831_zdf_ard_1958_sender |
| 832 | neutralität - österreich - jänschwalde - partei - bernadette | 58 | 832_neutralität_österreich_jänschwalde_partei |
| 833 | kanada - william - tompos - 18h - wetter | 38 | 833_kanada_william_tompos_18h |
| 834 | höheres - ekg - sportprogrammen - county - orange | 75 | 834_höheres_ekg_sportprogrammen_county |
| 835 | madrid - nürnberg - teusch - josuah - bernabéu | 39 | 835_madrid_nürnberg_teusch_josuah |
| 836 | neuestes - zerstörten - boden - spendenmöglichkeiten - mein | 61 | 836_neuestes_zerstörten_boden_spendenmöglichkeiten |
| 837 | prof - dr - med - doctor - kameramänner | 31 | 837_prof_dr_med_doctor |
| 838 | lipid - spikeprotein - lipide - alc - 0315 | 42 | 838_lipid_spikeprotein_lipide_alc |
| 839 | schröder - waters - lugner - sportler - moskau | 70 | 839_schröder_waters_lugner_sportler |
| 840 | omicron - dosen - impfstoff - biontech - ceo | 23 | 840_omicron_dosen_impfstoff_biontech |
| 841 | raketen - nato - waffen - strela - flugverbotszone | 256 | 841_raketen_nato_waffen_strela |
| 842 | korsika - colonna - korsischen - yvan - destille | 59 | 842_korsika_colonna_korsischen_yvan |
| 843 | zeolith - entgiftung - darmgase - schimmelpilzgifte - mineralerde | 71 | 843_zeolith_entgiftung_darmgase_schimmelpilzgifte |
| 844 | flutkatastrophe - wetteradler - erde - milliarden - schäden | 105 | 844_flutkatastrophe_wetteradler_erde_milliarden |
| 845 | freimaurer - borstel - krankenhäuser - 34 - rwi | 57 | 845_freimaurer_borstel_krankenhäuser_34 |
| 846 | veritas - project - rosenberg - michaeljaco10 - merchandise | 45 | 846_veritas_project_rosenberg_michaeljaco10 |
| 847 | versorgungskrise - fracking - produktion - industrie - stahlwerke | 133 | 847_versorgungskrise_fracking_produktion_industrie |
| 848 | lampenöl - autark - ausgießtülle - trichter - lagerbar | 20 | 848_lampenöl_autark_ausgießtülle_trichter |
| 849 | windigstem - wasserkocher - brennbarem - nassem - rekordzeit | 30 | 849_windigstem_wasserkocher_brennbarem_nassem |
| 850 | fermentation - entweichen - luftdichten - selbsteingelegten - gärventil | 20 | 850_fermentation_entweichen_luftdichten_selbsteingelegten |
| 851 | wildgebieten - selbstreinigend - alleskönner - absoluter - stiefel | 27 | 851_wildgebieten_selbstreinigend_alleskönner_absoluter |
| 852 | taktische - taschenlampe - wasserdichtigkeit - lampenkopf - hra90 | 37 | 852_taktische_taschenlampe_wasserdichtigkeit_lampenkopf |
| 853 | trinkwasserqualität - leitungswasser - maximale - beste - vitales | 51 | 853_trinkwasserqualität_leitungswasser_maximale_beste |

</details>

## Training hyperparameters

* calculate_probabilities: True
* language: multilingual
* low_memory: False
* min_topic_size: 10
* n_gram_range: (1, 1)
* nr_topics: None
* seed_topic_list: None
* top_n_words: 10
* verbose: True
* zeroshot_min_similarity: 0.7
* zeroshot_topic_list: None

## Framework versions

* Numpy: 1.25.2
* HDBSCAN: 0.8.33
* UMAP: 0.5.6
* Pandas: 1.5.3
* Scikit-Learn: 1.2.2
* Sentence-transformers: 2.6.1
* Transformers: 4.38.2
* Numba: 0.58.1
* Plotly: 5.15.0
* Python: 3.10.12
{"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"}
RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_13_prob
null
[ "bertopic", "text-classification", "region:us" ]
null
2024-04-12T14:58:52+00:00
[]
[]
TAGS #bertopic #text-classification #region-us
impf\_ukrain\_postcov\_all\_sns\_topics\_umap\_lok\_hdbscan\_lok\_ctfidf\_seed\_13\_prob ======================================================================================== This is a BERTopic model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. Usage ----- To use this model, please install BERTopic: You can use the model as follows: Topic overview -------------- * Number of topics: 854 * Number of training documents: 91393 Click here for an overview of all topics. Training hyperparameters ------------------------ * calculate\_probabilities: True * language: multilingual * low\_memory: False * min\_topic\_size: 10 * n\_gram\_range: (1, 1) * nr\_topics: None * seed\_topic\_list: None * top\_n\_words: 10 * verbose: True * zeroshot\_min\_similarity: 0.7 * zeroshot\_topic\_list: None Framework versions ------------------ * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.6.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
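A minimal usage sketch, assuming the standard BERTopic API (the card's original install and usage snippets are not preserved in this dump); the example document is illustrative:

```python
# Minimal sketch, assuming the standard BERTopic API; install first: pip install -U bertopic
from bertopic import BERTopic

# Load the published checkpoint directly from the Hugging Face Hub.
topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_13_prob")

# Assign topics (and, since calculate_probabilities is True, probabilities) to new documents.
topics, probs = topic_model.transform(["Beispieltext zu Energiepreisen im Supermarkt"])
print(topic_model.get_topic(topics[0]))
```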
[]
[ "TAGS\n#bertopic #text-classification #region-us \n" ]
text-generation
transformers
<img align="right" src="https://huggingface.co/Tweeties/tweety-tatar-base-7b-2024-v1/resolve/main/TweetyTatar.png?download=true" alt="Tweety-Tatar-7B: A Tatar Large Language Model" width="20%"> # Tweety Tatar / Hydra-MT 7b / 2024-v1 ## Model description This model is our Hydra LLM for the [Tatar language](https://en.wikipedia.org/wiki/Tatar_language), converted from the [TowerInstruct-7b-v0.1](https://huggingface.co/Unbabel/TowerInstruct-7B-v0.1) model trained by Unbabel, via [our Hydra-Base model](https://huggingface.co/Tweeties/tweety-tatar-hydra-base-7b-2024-v1). Hydra LLMs are trans-tokenized language models finetuned to produce output in a particular language, while accepting input encoded using either their own tokenizer, the one of their base model, or a mix of both. This enables them to receive code-switched input in both their native language and other languages, which is an ideal setup for translation tasks, or retrieval-augmented generation (RAG) in cross-lingual scenarios (see [our Hydra-Base model](https://huggingface.co/Tweeties/tweety-tatar-hydra-base-7b-2024-v1)). - **Developed by:** [François Remy](https://huggingface.co/FremyCompany) (UGent), [Alfiya Khabibullina](https://huggingface.co/justalphie) (BeCode), [et al.](#citation) - **Funded by:** IDLab / GPULab - **Model type:** Foundation model using the mistral architecture - **Language(s) (NLP):** Tatar - **License:** Creative Commons Attribution Non Commercial 4.0 ## In-scope usage This model can be used as-is or finetuned into a machine translation system from one of the 10 languages supported by TowerInstruct into the Tatar language. This list of languages nobably includes English and Russian. The model performs best when translating sentences or small paragraphs, and is not suited for document translation tasks. This model should not be used in the reverse direction, to translate Tatar into English. While the system is finetuned for translation, enabling beam search provides better results. Take note of the non-commercial license imposed by Unbabel on the base model, which also applies to this model. 
## Usage instructions

Using this model usually requires building the prompts by mixing tokens from two tokenizers, the original TowerInstruct tokenizer for input in the source language, and the new Tatar tokenizer for the prompt and output, as described in the examples below: 

```py
import re
import torch
import torch.nn as nn
import transformers

MODEL_NAME = "Tweeties/tweety-tatar-hydra-mt-7b-2024-v1"
MAIN_TOKENIZER_NAME = "Tweeties/tweety-tatar-hydra-mt-7b-2024-v1"
UTIL_TOKENIZER_NAME = "Unbabel/TowerInstruct-7B-v0.1"

model = transformers.AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True)
main_tokenizer = transformers.LlamaTokenizerFast.from_pretrained(MAIN_TOKENIZER_NAME)
util_tokenizer = transformers.LlamaTokenizerFast.from_pretrained(UTIL_TOKENIZER_NAME)

main_tokenizer_len = len(main_tokenizer)
```

### Machine Translation

```py
def translate_english_text(english_text: str) -> str:

    # craft the input
    input_ids = torch.concat([
        main_tokenizer.encode(f"Түбәндәге текстны инглиз теленнән татар теленә тәрҗемә итегез:\n", return_tensors='pt'),
        util_tokenizer.encode(f"{english_text}", add_special_tokens=False, return_tensors='pt') + torch.tensor([main_tokenizer_len]),
        main_tokenizer.encode(f"\nТекстны татар теленә тәрҗемә итү:\n", add_special_tokens=False, return_tensors='pt')
    ], axis=1)

    # prevent the model from repeating the prompt
    prompt_starts = [
        main_tokenizer.encode("Түбәндәге"),
        main_tokenizer.encode("\nТүбәндәге")[2:],
        main_tokenizer.encode("Текстны"),
        main_tokenizer.encode("\nТекстны")[2:]
    ]

    # generate the output
    model_inputs = {'input_ids':input_ids.to(model.device)}
    model_outputs = model.generate(
        **model_inputs,
        max_new_tokens=128,
        num_beams=8,
        no_repeat_ngram_size=6,
        early_stopping=False,
        pad_token_id=main_tokenizer.eos_token_id,
        eos_token_id=main_tokenizer.convert_tokens_to_ids(['<0x0A>','</s>']),
        bad_words_ids=prompt_starts
    )

    # decode the output
    return (main_tokenizer.decode(model_outputs[0][input_ids.shape[1]:]))

translate_english_text("The city of Paris is very pretty.") # Париж шәһәре бик матур.
```

## Citation

If you use this model, please cite our work as:

```
@article{tweeties2024,
    title = {Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP},
    author = {François Remy and Pieter Delobelle and Hayastan Avetisyan and Alfiya Khabibullina and Miryam de Lhoneux and Thomas Demeester},
    url = {https://raw.githubusercontent.com/LAGoM-NLP/transtokenizer/paper/Trans-Tokenization.pdf},
    year = {2024},
    note = {Under review at COLM 2024}
}
```
{"language": ["tt", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es"], "license": "cc-by-nc-4.0", "tags": ["tweety"], "datasets": ["oscar-corpus/OSCAR-2301"], "base_model": "Unbabel/TowerInstruct-7B-v0.1"}
Tweeties/tweety-tatar-hydra-mt-7b-2024-v1
null
[ "transformers", "safetensors", "llama_hydra", "text-generation", "tweety", "custom_code", "tt", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es", "dataset:oscar-corpus/OSCAR-2301", "base_model:Unbabel/TowerInstruct-7B-v0.1", "license:cc-by-nc-4.0", "autotrain_compatible", "region:us" ]
null
2024-04-12T14:59:30+00:00
[]
[ "tt", "en", "de", "fr", "zh", "pt", "nl", "ru", "ko", "it", "es" ]
TAGS #transformers #safetensors #llama_hydra #text-generation #tweety #custom_code #tt #en #de #fr #zh #pt #nl #ru #ko #it #es #dataset-oscar-corpus/OSCAR-2301 #base_model-Unbabel/TowerInstruct-7B-v0.1 #license-cc-by-nc-4.0 #autotrain_compatible #region-us
<img align="right" src="URL alt="Tweety-Tatar-7B: A Tatar Large Language Model" width="20%"> # Tweety Tatar / Hydra-MT 7b / 2024-v1 ## Model description This model is our Hydra LLM for the Tatar language, converted from the TowerInstruct-7b-v0.1 model trained by Unbabel, via our Hydra-Base model. Hydra LLMs are trans-tokenized language models finetuned to produce output in a particular language, while accepting input encoded using either their own tokenizer, the one of their base model, or a mix of both. This enables them to receive code-switched input in both their native language and other languages, which is an ideal setup for translation tasks, or retrieval-augmented generation (RAG) in cross-lingual scenarios (see our Hydra-Base model). - Developed by: François Remy (UGent), Alfiya Khabibullina (BeCode), et al. - Funded by: IDLab / GPULab - Model type: Foundation model using the mistral architecture - Language(s) (NLP): Tatar - License: Creative Commons Attribution Non Commercial 4.0 ## In-scope usage This model can be used as-is or finetuned into a machine translation system from one of the 10 languages supported by TowerInstruct into the Tatar language. This list of languages nobably includes English and Russian. The model performs best when translating sentences or small paragraphs, and is not suited for document translation tasks. This model should not be used in the reverse direction, to translate Tatar into English. While the system is finetuned for translation, enabling beam search provides better results. Take note of the non-commercial license imposed by Unbabel on the base model, which also applies to this model. ## Usage instructions Using this model usually requires building the prompts by mixing tokens from two tokenizers, the original TowerInstruct tokenizer for input in the source language, and the new Tatar tokenizer for the prompt and output, as described in the examples below: ### Machine Translation If you use this model, please cite our work as:
[ "# Tweety Tatar / Hydra-MT 7b / 2024-v1", "## Model description\nThis model is our Hydra LLM for the Tatar language, converted from the TowerInstruct-7b-v0.1 model trained by Unbabel, via our Hydra-Base model. \nHydra LLMs are trans-tokenized language models finetuned to produce output in a particular language, while accepting input encoded using either their own tokenizer, the one of their base model, or a mix of both. \nThis enables them to receive code-switched input in both their native language and other languages, which is an ideal setup for translation tasks, or retrieval-augmented generation (RAG) in cross-lingual scenarios (see our Hydra-Base model).\n\n- Developed by: François Remy (UGent), Alfiya Khabibullina (BeCode), et al.\n- Funded by: IDLab / GPULab\n- Model type: Foundation model using the mistral architecture\n- Language(s) (NLP): Tatar\n- License: Creative Commons Attribution Non Commercial 4.0", "## In-scope usage\nThis model can be used as-is or finetuned into a machine translation system from one of the 10 languages supported by TowerInstruct into the Tatar language. \nThis list of languages nobably includes English and Russian. \nThe model performs best when translating sentences or small paragraphs, and is not suited for document translation tasks. \nThis model should not be used in the reverse direction, to translate Tatar into English. \nWhile the system is finetuned for translation, enabling beam search provides better results. \nTake note of the non-commercial license imposed by Unbabel on the base model, which also applies to this model.", "## Usage instructions\nUsing this model usually requires building the prompts by mixing tokens from two tokenizers, the original TowerInstruct tokenizer for input in the source language, and the new Tatar tokenizer for the prompt and output, as described in the examples below:", "### Machine Translation\n\n\n\n\nIf you use this model, please cite our work as:" ]
[ "TAGS\n#transformers #safetensors #llama_hydra #text-generation #tweety #custom_code #tt #en #de #fr #zh #pt #nl #ru #ko #it #es #dataset-oscar-corpus/OSCAR-2301 #base_model-Unbabel/TowerInstruct-7B-v0.1 #license-cc-by-nc-4.0 #autotrain_compatible #region-us \n", "# Tweety Tatar / Hydra-MT 7b / 2024-v1", "## Model description\nThis model is our Hydra LLM for the Tatar language, converted from the TowerInstruct-7b-v0.1 model trained by Unbabel, via our Hydra-Base model. \nHydra LLMs are trans-tokenized language models finetuned to produce output in a particular language, while accepting input encoded using either their own tokenizer, the one of their base model, or a mix of both. \nThis enables them to receive code-switched input in both their native language and other languages, which is an ideal setup for translation tasks, or retrieval-augmented generation (RAG) in cross-lingual scenarios (see our Hydra-Base model).\n\n- Developed by: François Remy (UGent), Alfiya Khabibullina (BeCode), et al.\n- Funded by: IDLab / GPULab\n- Model type: Foundation model using the mistral architecture\n- Language(s) (NLP): Tatar\n- License: Creative Commons Attribution Non Commercial 4.0", "## In-scope usage\nThis model can be used as-is or finetuned into a machine translation system from one of the 10 languages supported by TowerInstruct into the Tatar language. \nThis list of languages nobably includes English and Russian. \nThe model performs best when translating sentences or small paragraphs, and is not suited for document translation tasks. \nThis model should not be used in the reverse direction, to translate Tatar into English. \nWhile the system is finetuned for translation, enabling beam search provides better results. \nTake note of the non-commercial license imposed by Unbabel on the base model, which also applies to this model.", "## Usage instructions\nUsing this model usually requires building the prompts by mixing tokens from two tokenizers, the original TowerInstruct tokenizer for input in the source language, and the new Tatar tokenizer for the prompt and output, as described in the examples below:", "### Machine Translation\n\n\n\n\nIf you use this model, please cite our work as:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
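The getting-started snippet above is left as [More Information Needed]; the following is a generic, hypothetical loading sketch inferred only from the repo id and its tags (transformers, stablelm, text-generation), not from the card author:

```python
# Hypothetical sketch: generic causal-LM loading; nothing here comes from the card itself.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("shallow6414/nly17bl")
model = AutoModelForCausalLM.from_pretrained("shallow6414/nly17bl", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```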
{"library_name": "transformers", "tags": []}
shallow6414/nly17bl
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:00:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# results

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5933
- Accuracy: 0.8596
- F1: 0.8420

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6828 | 1.0 | 182 | 0.7011 | 0.6976 | 0.4393 |
| 0.4415 | 2.0 | 364 | 0.4868 | 0.8266 | 0.7933 |
| 0.4762 | 3.0 | 546 | 0.5500 | 0.8163 | 0.7798 |
| 0.2522 | 4.0 | 728 | 0.5855 | 0.8369 | 0.8139 |
| 0.1986 | 5.0 | 910 | 0.5933 | 0.8596 | 0.8420 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
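A minimal inference sketch, assuming the standard transformers pipeline API and the published repo id (the card itself includes no usage code); the example sentence is illustrative:

```python
# Hedged usage sketch: standard text-classification pipeline on the published checkpoint.
from transformers import pipeline

classifier = pipeline("text-classification", model="kbaumgartner/DeBERTa_Finetuned_Financial_News")
print(classifier("Shares surged after the company raised its full-year guidance."))
```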
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "microsoft/deberta-v3-base", "model-index": [{"name": "results", "results": []}]}
kbaumgartner/DeBERTa_Finetuned_Financial_News
null
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-12T15:03:33+00:00
[]
[]
TAGS #transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
results
=======

This model is a fine-tuned version of microsoft/deberta-v3-base on an unspecified dataset. It achieves the following results on the evaluation set:

* Loss: 0.5933
* Accuracy: 0.8596
* F1: 0.8420

Model description
-----------------

More information needed

Intended uses & limitations
---------------------------

More information needed

Training and evaluation data
----------------------------

More information needed

Training procedure
------------------

### Training hyperparameters

The following hyperparameters were used during training:

* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 64
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP

### Training results

### Framework versions

* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
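For reference, a hedged sketch of how the hyperparameters listed above would map onto transformers TrainingArguments; the argument names are inferred from the list, not taken from the author's training script:

```python
# Hypothetical reconstruction of the listed hyperparameters; the Adam betas/epsilon and
# linear warmup scheduler match the defaults named in the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
    fp16=True,  # "mixed_precision_training: Native AMP"
)
```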
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #deberta-v2 #text-classification #generated_from_trainer #base_model-microsoft/deberta-v3-base #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 64\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-speech
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # speecht5_finetuned_voxpopuli_accented_en_de This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the Voxpopuli dataset. It achieves the following results on the evaluation set: - Loss: 0.4991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.4793 | 108.11 | 1000 | 0.4991 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
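A minimal inference sketch, assuming the standard SpeechT5 API; the vocoder checkpoint and the placeholder speaker embedding below are illustrative assumptions, not part of the original card:

```python
# Hedged sketch: SpeechT5 text-to-speech with a HiFi-GAN vocoder and a speaker x-vector.
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo = "constantinSch/speecht5_finetuned_voxpopuli_accented_en_de"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test sentence.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real 512-dim x-vector for natural speech
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)  # 16 kHz waveform
```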
{"language": ["en"], "license": "mit", "tags": ["generated_from_trainer"], "datasets": ["facebook/voxpopuli"], "base_model": "microsoft/speecht5_tts", "pipeline_tag": "text-to-speech", "model-index": [{"name": "speecht5_finetuned_voxpopuli_accented_en_de", "results": []}]}
constantinSch/speecht5_finetuned_voxpopuli_accented_en_de
null
[ "transformers", "tensorboard", "safetensors", "speecht5", "text-to-audio", "generated_from_trainer", "text-to-speech", "en", "dataset:facebook/voxpopuli", "base_model:microsoft/speecht5_tts", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:04:31+00:00
[]
[ "en" ]
TAGS #transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #text-to-speech #en #dataset-facebook/voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
speecht5\_finetuned\_voxpopuli\_accented\_en\_de ================================================ This model is a fine-tuned version of microsoft/speecht5\_tts on the Voxpopuli dataset. It achieves the following results on the evaluation set: * Loss: 0.4991 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 4 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 1000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #text-to-speech #en #dataset-facebook/voxpopuli #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. 
Check that the requirements from the original repo deepseek-ai/deepseek-coder-6.7b-instruct are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the GPTQ-smashed checkpoint and the original model's tokenizer.
    model = AutoModelForCausalLM.from_pretrained("PrunaAI/deepseek-ai-deepseek-coder-6.7b-instruct-GPTQ-8bit-smashed",
                                                 trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")

    input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]

    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model deepseek-ai/deepseek-coder-6.7b-instruct, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/deepseek-ai-deepseek-coder-6.7b-instruct-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:07:14+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo deepseek-ai/deepseek-coder-6.7b-instruct installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model deepseek-ai/deepseek-coder-6.7b-instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo deepseek-ai/deepseek-coder-6.7b-instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model deepseek-ai/deepseek-coder-6.7b-instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo deepseek-ai/deepseek-coder-6.7b-instruct installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model deepseek-ai/deepseek-coder-6.7b-instruct before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. 
Check requirements from the original repo 42dot/42dot_LLM-SFT-1.3B installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/42dot-42dot_LLM-SFT-1.3B-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("42dot/42dot_LLM-SFT-1.3B") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info are in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model 42dot/42dot_LLM-SFT-1.3B before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
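The FAQ above distinguishes "Sync" and "Async" timings. As a rough illustration of what a "Sync"-style measurement looks like in practice, here is a minimal sketch; the warmup count and the timing loop are our own illustration, not part of the original card's benchmark harness:

```python
import time

import torch

# Continues the loading snippet above: `model` and `input_ids` are already defined,
# and the model is assumed to sit on a CUDA device.
# Warmup runs absorb one-time CUDA overheads (this is what "first" metrics capture).
for _ in range(3):
    model.generate(input_ids, max_new_tokens=216)

# "Sync"-style timing: wait for all GPU work to finish before stopping the clock.
torch.cuda.synchronize()
start = time.perf_counter()
model.generate(input_ids, max_new_tokens=216)
torch.cuda.synchronize()
print(f"sync latency: {time.perf_counter() - start:.2f} s")
```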
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/42dot-42dot_LLM-SFT-1.3B-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:07:41+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo 42dot/42dot_LLM-SFT-1.3B installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model 42dot/42dot_LLM-SFT-1.3B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo 42dot/42dot_LLM-SFT-1.3B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model 42dot/42dot_LLM-SFT-1.3B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo 42dot/42dot_LLM-SFT-1.3B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model 42dot/42dot_LLM-SFT-1.3B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be the relevant one depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo wangrongsheng/MiniGPT-4-LLaMA-7B are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the GPTQ-quantized weights from the smashed repo.
    model = AutoModelForCausalLM.from_pretrained("PrunaAI/wangrongsheng-MiniGPT-4-LLaMA-7B-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
    # The tokenizer is unchanged, so it comes from the original base repo.
    tokenizer = AutoTokenizer.from_pretrained("wangrongsheng/MiniGPT-4-LLaMA-7B")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model wangrongsheng/MiniGPT-4-LLaMA-7B, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
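As a small extension of the loading snippet above (our addition, not part of the original card), token streaming can make interactive use of the smashed model feel more responsive:

```python
from transformers import TextStreamer

# Reuses `model`, `tokenizer`, and `input_ids` from the loading snippet above.
# Tokens are printed as they are generated instead of after the full decode.
streamer = TextStreamer(tokenizer, skip_prompt=True)
model.generate(input_ids, max_new_tokens=216, streamer=streamer)
```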
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/wangrongsheng-MiniGPT-4-LLaMA-7B-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:08:03+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo wangrongsheng/MiniGPT-4-LLaMA-7B installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model wangrongsheng/MiniGPT-4-LLaMA-7B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo wangrongsheng/MiniGPT-4-LLaMA-7B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model wangrongsheng/MiniGPT-4-LLaMA-7B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo wangrongsheng/MiniGPT-4-LLaMA-7B installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model wangrongsheng/MiniGPT-4-LLaMA-7B before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gym

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course
# notebooks; it downloads and unpickles the saved Q-learning model dictionary.
model = load_from_hub(repo_id="BecTome/Taxi", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
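To see the loaded Q-table in action, a minimal greedy rollout could look like the sketch below. The `qtable` key and the classic four-tuple `gym` step API follow the usual Deep RL course conventions and are assumptions here, so check the actual keys of the downloaded dictionary and your `gym` version:

```python
import gym
import numpy as np

env = gym.make(model["env_id"])
state = env.reset()
done, total_reward = False, 0.0
while not done:
    # Greedy policy: pick the action with the highest Q-value for this state.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward}")
```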
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.76", "name": "mean_reward", "verified": false}]}]}]}
BecTome/Taxi
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-12T15:09:58+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3.\n\n ## Usage" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to see whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be the relevant one depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo openchat/openchat_3.5 are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load the GPTQ-quantized weights from the smashed repo.
    model = AutoModelForCausalLM.from_pretrained("PrunaAI/openchat-openchat_3.5-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
    # The tokenizer is unchanged, so it comes from the original base repo.
    tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5")

    input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model openchat/openchat_3.5, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
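One detail worth flagging for this particular base model (our note, not the card's): OpenChat models are conversation-tuned, so if the tokenizer ships a chat template, formatting the prompt through it usually gives better results than a raw string:

```python
# Reuses `model` and `tokenizer` from the snippet above.
messages = [{"role": "user", "content": "What is the color of prunes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=216)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```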
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/openchat-openchat_3.5-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "mistral", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:10:58+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo openchat/openchat_3.5 installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model openchat/openchat_3.5 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openchat/openchat_3.5 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openchat/openchat_3.5 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo openchat/openchat_3.5 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model openchat/openchat_3.5 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# biogpt

This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1701

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2968        | 1.0   | 680  | 0.1899          |
| 0.1873        | 2.0   | 1360 | 0.1701          |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
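For readers who want to reproduce this setup, the hyperparameters above translate into a 🤗 `Trainer` configuration roughly as follows. The model, datasets, and output directory are placeholders, since the card does not name the base checkpoint or training data:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="biogpt",              # placeholder output directory
    learning_rate=2e-05,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",      # matches the per-epoch validation losses above
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
trainer = Trainer(
    model=model,                      # placeholder: the token-classification model
    args=training_args,
    train_dataset=train_dataset,      # placeholder dataset
    eval_dataset=eval_dataset,        # placeholder dataset
)
trainer.train()
```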
{"tags": ["generated_from_trainer"], "model-index": [{"name": "biogpt", "results": []}]}
AnkushJindal28/biogpt
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:13:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
biogpt ====== This model is a fine-tuned version of [](URL on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1701 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
-->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
{"library_name": "transformers", "tags": []}
AL12dw/Lyria2
null
[ "transformers", "safetensors", "phi", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:14:05+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #phi #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/mpasila/Mistral-7B-Erebus-v3-Instruct-32k <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF/resolve/main/Mistral-7B-Erebus-v3-Instruct-32k.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request 
See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
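As a concrete complement to the Usage section above, here is a minimal sketch of running one of the provided quants with the `llama-cpp-python` bindings. The file name matches the Q4_K_M row of the table; the context size, GPU-offload setting, and Mistral-style instruct prompt are illustrative assumptions rather than documented settings for this model.

```python
# Minimal sketch: load a GGUF quant from this repo with llama-cpp-python
# (pip install llama-cpp-python). Paths and settings below are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-7B-Erebus-v3-Instruct-32k.Q4_K_M.gguf",  # the Q4_K_M file from the table above
    n_ctx=4096,        # context window; the base model advertises up to 32k
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("[INST] Write a two-sentence story about a lighthouse. [/INST]", max_tokens=128)
print(out["choices"][0]["text"])
```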
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "mpasila/Mistral-7B-Erebus-v3-Instruct-32k", "quantized_by": "mradermacher"}
mradermacher/Mistral-7B-Erebus-v3-Instruct-32k-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:mpasila/Mistral-7B-Erebus-v3-Instruct-32k", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:16:22+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-mpasila/Mistral-7B-Erebus-v3-Instruct-32k #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-mpasila/Mistral-7B-Erebus-v3-Instruct-32k #endpoints_compatible #region-us \n" ]
null
adapter-transformers
# Adapter `BigTMiami/A1_adapter_seq_bn_pretrain_P_5` for roberta-base An [adapter](https://adapterhub.ml) for the `roberta-base` model that was trained on the [BigTMiami/amazon_split_25M_reviews_20_percent_condensed](https://huggingface.co/datasets/BigTMiami/amazon_split_25M_reviews_20_percent_condensed/) dataset and includes a prediction head for masked lm. This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library. ## Usage First, install `adapters`: ``` pip install -U adapters ``` Now, the adapter can be loaded and activated like this: ```python from adapters import AutoAdapterModel model = AutoAdapterModel.from_pretrained("roberta-base") adapter_name = model.load_adapter("BigTMiami/A1_adapter_seq_bn_pretrain_P_5", source="hf", set_active=True) ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
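As a follow-up to the loading snippet above, a quick sanity check of the masked-LM head might look like the sketch below. It assumes the loaded prediction head returns standard `logits`, and the example sentence is purely illustrative.

```python
# Hedged sketch: query the adapter's masked-LM head, continuing from the
# loading code above (`model` is the AutoAdapterModel with the adapter active).
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
inputs = tokenizer("This product is absolutely <mask>.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # [1, seq_len, vocab_size] from the MLM head

mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5).indices[0]
print(tokenizer.decode(top5))  # five most likely fillers for <mask>
```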
{"tags": ["roberta", "adapter-transformers"], "datasets": ["BigTMiami/amazon_split_25M_reviews_20_percent_condensed"]}
BigTMiami/A1_adapter_seq_bn_pretrain_P_5
null
[ "adapter-transformers", "roberta", "dataset:BigTMiami/amazon_split_25M_reviews_20_percent_condensed", "region:us" ]
null
2024-04-12T15:16:38+00:00
[]
[]
TAGS #adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us
# Adapter 'BigTMiami/A1_adapter_seq_bn_pretrain_P_5' for roberta-base An adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm. This adapter was created for usage with the Adapters library. ## Usage First, install 'adapters': Now, the adapter can be loaded and activated like this: ## Architecture & Training ## Evaluation results
[ "# Adapter 'BigTMiami/A1_adapter_seq_bn_pretrain_P_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
[ "TAGS\n#adapter-transformers #roberta #dataset-BigTMiami/amazon_split_25M_reviews_20_percent_condensed #region-us \n", "# Adapter 'BigTMiami/A1_adapter_seq_bn_pretrain_P_5' for roberta-base\n\nAn adapter for the 'roberta-base' model that was trained on the BigTMiami/amazon_split_25M_reviews_20_percent_condensed dataset and includes a prediction head for masked lm.\n\nThis adapter was created for usage with the Adapters library.", "## Usage\n\nFirst, install 'adapters':\n\n\n\nNow, the adapter can be loaded and activated like this:", "## Architecture & Training", "## Evaluation results" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# project-2

This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.2554
- F1: 0.7160
- Roc Auc: 0.8086
- Accuracy: 0.6646

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | Roc Auc | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|:--------:|
| 0.2497        | 1.0   | 73895 | 0.2554          | 0.7160 | 0.8086  | 0.6646   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2
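Since the card does not yet include a usage snippet, here is a hedged inference sketch. The multi-label framing (sigmoid scores with a 0.5 threshold) is an assumption based on the F1/ROC-AUC metrics above, the label names are not documented, and PhoBERT normally expects word-segmented Vietnamese input, which is omitted here for brevity.

```python
# Hedged sketch: query the fine-tuned PhoBERT classifier. The multi-label
# framing and the example sentence are assumptions, not documented behaviour.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "sonchuate/project-2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Sản phẩm này rất tốt.", return_tensors="pt")  # "This product is very good."
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

predicted = (probs > 0.5).nonzero(as_tuple=True)[0].tolist()
print(predicted, [round(p, 3) for p in probs.tolist()])  # label indices and scores
```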
{"tags": ["generated_from_trainer"], "metrics": ["f1", "accuracy"], "base_model": "vinai/phobert-base", "model-index": [{"name": "project-2", "results": []}]}
sonchuate/project-2
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:vinai/phobert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:18:55+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us
project-2 ========= This model is a fine-tuned version of vinai/phobert-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2554 * F1: 0.7160 * Roc Auc: 0.8086 * Accuracy: 0.6646 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.1.2 * Datasets 2.1.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
    <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
      <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
    </a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

## Results

![image info](./plots.png)

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with gptq.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the models directly under your use-case conditions to find out whether the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping as soon as the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use case. We recommend testing the efficiency gains directly in your use cases.

## Setup

You can run the smashed model with these steps:

0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T are installed. In particular, check the python, cuda, and transformers versions.
1. Make sure that you have installed the quantization-related packages.
    ```bash
    pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate
    ```
2. Load & run the model.
    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-715k-1.5T-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T")

    input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
    outputs = model.generate(input_ids, max_new_tokens=216)
    tokenizer.decode(outputs[0])
    ```

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model. Please check the license of the original model, TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
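To make the "Sync" vs. "Async" distinction from the FAQ concrete, the sketch below shows the two measurement styles in plain PyTorch. It is illustrative only and is not Pruna's actual benchmarking code.

```python
# Illustrative only: "Sync" vs "Async" latency measurement as described in the FAQ.
import time
import torch

def measure_latency(model, inputs: dict, sync: bool) -> float:
    """Time one forward pass; sync=True mimics the "Sync" metric."""
    start = time.perf_counter()
    with torch.no_grad():
        out = model(**inputs).logits
    if sync and torch.cuda.is_available():
        torch.cuda.synchronize()  # "Sync": wait until every queued GPU kernel finishes
    else:
        out = out.cpu()  # "Async": stop once the output is usable by the CPU
    return time.perf_counter() - start
```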
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-715k-1.5T-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:19:38+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-715k-1.5T before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
<p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/YXqUXFjX8uIJT-mdOnM1h.png" alt="" style="width: 95%; max-height: 750px;"> </p> ## Metrics. <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/g7YbkdYm6G4dUdWGRa0nG.png" alt="" style="width: 95%; max-height: 750px;"> </p> <p align="center"> <img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/CN4dNjrvbvwOZqjrCWT7_.png" alt="" style="width: 95%; max-height: 750px;"> </p> ``` TrainOutput(global_step=1390, training_loss=1.0502444919064748, metrics={ 'train_runtime': 22700.8355, 'train_samples_per_second': 2.449, 'train_steps_per_second': 0.061, 'total_flos': 1.2395973405265306e+18, 'train_loss': 1.0502444919064748, 'epoch': 4.05 }) ``` ## Take dataset. ``` OpenAssistant/oasst2 ``` ## Dataset format gemma fine tune. ``` NickyNicky/oasst2_clusters ``` ## colab examples. ``` https://colab.research.google.com/drive/16qS7NMSu20LzcwvYCrBGVI7rd9Hr-vpN?usp=sharing ```
{"language": ["bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["NickyNicky/oasst2_clusters", "OpenAssistant/oasst2"], "model": ["google/gemma-1.1-2b-it"], "widget": [{"text": "<bos><start_of_turn>system\nYou are a helpful AI assistant.<end_of_turn>\n<start_of_turn>user\n{question}<end_of_turn>\n<start_of_turn>model"}]}
NickyNicky/gemma-1.1-2b-it_oasst_format_chatML_unsloth_V1
null
[ "transformers", "safetensors", "gemma", "text-generation", "conversational", "bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk", "dataset:NickyNicky/oasst2_clusters", "dataset:OpenAssistant/oasst2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:21:25+00:00
[]
[ "bg", "ca", "cs", "da", "de", "en", "es", "fr", "hr", "hu", "it", "nl", "pl", "pt", "ro", "ru", "sl", "sr", "sv", "uk" ]
TAGS #transformers #safetensors #gemma #text-generation #conversational #bg #ca #cs #da #de #en #es #fr #hr #hu #it #nl #pl #pt #ro #ru #sl #sr #sv #uk #dataset-NickyNicky/oasst2_clusters #dataset-OpenAssistant/oasst2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<p align="center"> <img src="URL alt="" style="width: 95%; max-height: 750px;"> </p> ## Metrics. <p align="center"> <img src="URL alt="" style="width: 95%; max-height: 750px;"> </p> <p align="center"> <img src="URL alt="" style="width: 95%; max-height: 750px;"> </p> ## Take dataset. ## Dataset format gemma fine tune. ## colab examples.
[ "## Metrics.\n\n<p align=\"center\">\n <img src=\"URL alt=\"\" style=\"width: 95%; max-height: 750px;\">\n</p>\n\n<p align=\"center\">\n <img src=\"URL alt=\"\" style=\"width: 95%; max-height: 750px;\">\n</p>", "## Take dataset.", "## Dataset format gemma fine tune.", "## colab examples." ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #conversational #bg #ca #cs #da #de #en #es #fr #hr #hu #it #nl #pl #pt #ro #ru #sl #sr #sv #uk #dataset-NickyNicky/oasst2_clusters #dataset-OpenAssistant/oasst2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "## Metrics.\n\n<p align=\"center\">\n <img src=\"URL alt=\"\" style=\"width: 95%; max-height: 750px;\">\n</p>\n\n<p align=\"center\">\n <img src=\"URL alt=\"\" style=\"width: 95%; max-height: 750px;\">\n</p>", "## Take dataset.", "## Dataset format gemma fine tune.", "## colab examples." ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # PolizzeDonut-ChangeRequest-imm5epochs-Expand2 This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
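Since the card has no usage example, here is a hedged sketch following the standard Donut inference pattern. The task-prompt token expected by this fine-tune is not documented, so the `<s>` prompt below is a placeholder assumption, as is the image path.

```python
# Hedged sketch: run the fine-tuned Donut checkpoint on one document image.
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand2"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("policy_page.png").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values
prompt_ids = processor.tokenizer("<s>", add_special_tokens=False, return_tensors="pt").input_ids  # placeholder task prompt

with torch.no_grad():
    out = model.generate(pixel_values, decoder_input_ids=prompt_ids, max_length=512)
print(processor.batch_decode(out, skip_special_tokens=True)[0])
```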
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "naver-clova-ix/donut-base", "model-index": [{"name": "PolizzeDonut-ChangeRequest-imm5epochs-Expand2", "results": []}]}
tedad09/PolizzeDonut-ChangeRequest-imm5epochs-Expand2
null
[ "transformers", "tensorboard", "safetensors", "vision-encoder-decoder", "generated_from_trainer", "dataset:imagefolder", "base_model:naver-clova-ix/donut-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:23:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us
# PolizzeDonut-ChangeRequest-imm5epochs-Expand2 This model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# PolizzeDonut-ChangeRequest-imm5epochs-Expand2\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vision-encoder-decoder #generated_from_trainer #dataset-imagefolder #base_model-naver-clova-ix/donut-base #license-mit #endpoints_compatible #region-us \n", "# PolizzeDonut-ChangeRequest-imm5epochs-Expand2\n\nThis model is a fine-tuned version of naver-clova-ix/donut-base on the imagefolder dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.2+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 340 | 0.1963 | | 0.2426 | 2.0 | 680 | 0.1614 | | 0.1198 | 3.0 | 1020 | 0.1458 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
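Because the training dataset (and therefore the label set) is unknown, any usage example can only be a sketch; the one below runs the checkpoint through the token-classification pipeline with an illustrative biomedical sentence.

```python
# Hedged sketch: token classification with the fine-tuned BioGPT checkpoint.
# The entity labels come from the undocumented training dataset.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Irisissocute/results",
    aggregation_strategy="simple",  # merge sub-word pieces into entity spans
)
print(ner("Metformin is commonly prescribed for type 2 diabetes."))
```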
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/biogpt", "model-index": [{"name": "results", "results": []}]}
Irisissocute/results
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:26:15+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
results ======= This model is a fine-tuned version of microsoft/biogpt on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1458 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
This model is still in training and needs more data and time. Day by day, roughly one hour of Dolphin Coder / White Rabbit (and, more generally, Orca/Dolphin) data will be added until those datasets are fully overfit, before other datasets are applied. It is getting better! The model remains under training: the base model becomes each updated version, so this model is always training and changing, and if a problem happens it can revert to its previous base self.

# Uploaded model

- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model:** Mixtral_AI_MiniTron

Recently updated with a few more datasets (still not ready to talk about!) and converging quickly on the datasets.

This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
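Since the card mentions Unsloth but shows no loading code, here is a hedged sketch of loading the checkpoint with Unsloth's fast loader. The sequence length, 4-bit setting, and Alpaca-style prompt are illustrative assumptions (the training data includes alpaca-cleaned), not documented settings.

```python
# Hedged sketch: load the checkpoint with Unsloth's fast loader.
# max_seq_length, 4-bit quantization, and the prompt format are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_MiniTron_SFT_3.75b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

prompt = "### Instruction:\nName three prime numbers.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```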
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "datasets": ["abacusai/ARC_DPO_FewShot", "abacusai/MetaMathFewshot", "abacusai/HellaSwag_DPO_FewShot", "yahma/alpaca-cleaned", "iamtarun/python_code_instructions_18k_alpaca", "gate369/Alpaca-Star"], "base_model": "LeroyDyer/Mixtral_AI_MiniTron_SFT"}
LeroyDyer/Mixtral_AI_MiniTron_SFT_3.75b
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:abacusai/ARC_DPO_FewShot", "dataset:abacusai/MetaMathFewshot", "dataset:abacusai/HellaSwag_DPO_FewShot", "dataset:yahma/alpaca-cleaned", "dataset:iamtarun/python_code_instructions_18k_alpaca", "dataset:gate369/Alpaca-Star", "base_model:LeroyDyer/Mixtral_AI_MiniTron_SFT", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:28:04+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-abacusai/ARC_DPO_FewShot #dataset-abacusai/MetaMathFewshot #dataset-abacusai/HellaSwag_DPO_FewShot #dataset-yahma/alpaca-cleaned #dataset-iamtarun/python_code_instructions_18k_alpaca #dataset-gate369/Alpaca-Star #base_model-LeroyDyer/Mixtral_AI_MiniTron_SFT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
This model is under training ! requires data !! : and time ! (day by day will add 1hr of dolphin Coder/White rabbit (generally and orca/dolphin ) until the datasets are fully overfit! before applying other datasets) Getting better ! under training ( the base model become the updated version , this model is always in training and changing if a problem happens he can revert to his previous base self: ) # Uploaded model - Developed by: LeroyDyer - License: apache-2.0 - Finetuned from model : Mixtral_AI_MiniTron Recently upated with a few more datasets::::: (Still not ready to talkabout !) (converging quickly on datasets) This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : Mixtral_AI_MiniTron\n\nRecently upated with a few more datasets::::: (Still not ready to talkabout !) (converging quickly on datasets)\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-abacusai/ARC_DPO_FewShot #dataset-abacusai/MetaMathFewshot #dataset-abacusai/HellaSwag_DPO_FewShot #dataset-yahma/alpaca-cleaned #dataset-iamtarun/python_code_instructions_18k_alpaca #dataset-gate369/Alpaca-Star #base_model-LeroyDyer/Mixtral_AI_MiniTron_SFT #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : Mixtral_AI_MiniTron\n\nRecently upated with a few more datasets::::: (Still not ready to talkabout !) (converging quickly on datasets)\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
question-answering
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Tsedeniya/t5-finetuned-squad
null
[ "transformers", "tensorboard", "safetensors", "t5", "question-answering", "arxiv:1910.09700", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:28:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #t5 #question-answering #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #question-answering #arxiv-1910.09700 #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
mooo16/Gemme-trained
null
[ "transformers", "safetensors", "gemma", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-12T15:29:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gemma #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gemma #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
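The card in the record above leaves its "How to Get Started" section as [More Information Needed], so the following is only a minimal sketch of how such a checkpoint is commonly loaded. The repo id `mooo16/Gemme-trained` is taken from the record; the `4-bit` tag suggests a quantized Gemma checkpoint (loading it may additionally require `bitsandbytes` or `auto-gptq`), and the prompt and generation settings below are illustrative assumptions, not documented usage.

```python
# Hedged sketch, not documented usage: load the SFT Gemma checkpoint named in
# the record above and run a short generation. Requires `transformers` and
# `accelerate`; a 4-bit checkpoint may also need `bitsandbytes` or `auto-gptq`.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mooo16/Gemme-trained"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# The prompt is purely illustrative; the card documents no prompt format.
inputs = tokenizer("Explain beam search in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```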
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tomaszki/stablelm-29
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:29:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
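This card is likewise an empty template. Given the `conversational` tag on the record, a hedged sketch of chat-style inference would use the tokenizer's chat template, if the checkpoint defines one. The repo id `tomaszki/stablelm-29` comes from the record; the message content is illustrative, and a transformers version that includes the `stablelm` architecture is assumed.

```python
# Hedged sketch, not documented usage: chat-style generation via the
# tokenizer's chat template. Assumes the checkpoint ships a chat template
# and a recent transformers release with StableLM support.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "tomaszki/stablelm-29"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [{"role": "user", "content": "Give me three facts about prunes."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```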
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
jhovany/Beto_Clasificiar_Tweets_Mexicanos_Homomex_94-6
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:29:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
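The repo name in this record (`jhovany/Beto_Clasificiar_Tweets_Mexicanos_Homomex_94-6`) suggests a BETO (Spanish BERT) classifier fine-tuned on Mexican tweets for the HOMO-MEX task, although the card itself documents nothing. A hedged sketch of inference with the standard pipeline API follows; the label names are whatever the fine-tuning run stored, which this card does not reveal.

```python
# Hedged sketch, not documented usage: run the fine-tuned BERT classifier
# named in the record above through the text-classification pipeline.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jhovany/Beto_Clasificiar_Tweets_Mexicanos_Homomex_94-6",
)
# Label names are undocumented; generic LABEL_0/LABEL_1-style outputs are likely.
print(clf("Qué orgullo marchar hoy con mi familia."))
```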
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-955k-token-2T-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/TinyLlama-TinyLlama-1.1B-intermediate-step-955k-token-2T-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:31:35+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL" target="_blank" rel="noopener noreferrer"> <img src="https://i.URL" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL) ![GitHub](URL) ![LinkedIn](URL) ![Discord](URL) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentation to know more here - Join the Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T are installed. In particular, check the Python, CUDA, and transformers versions. 1. Make sure that you have installed the quantization-related packages. 2. Load & run the model. ## Configurations The configuration info is in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T are installed. In particular, check the Python, CUDA, and transformers versions.\n1. Make sure that you have installed the quantization-related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentation to know more here\n- Join the Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in 'model/smash_config.json', after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly in your use-case conditions to know whether the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stopping the measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T are installed. In particular, check the Python, CUDA, and transformers versions.\n1. Make sure that you have installed the quantization-related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info is in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T, which provided the base model, before using this model. The license of the 'pruna-engine' is here on PyPI.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
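The FAQ in the record above distinguishes "Sync" from "Async" latency metrics. As an illustration only (this is not Pruna's benchmark harness), a "sync" measurement inserts a CUDA synchronization before stopping the timer, so the clock includes all queued GPU work. The sketch assumes `model` and `input_ids` were created as in the card's setup snippet and that a CUDA device is available.

```python
# Illustrative "sync" latency measurement, matching the FAQ's description.
# Assumes `model` and `input_ids` exist as in the card's Python snippet.
import time
import torch

for _ in range(3):  # hardware warmup, as the FAQ notes for "first" metrics
    model.generate(input_ids, max_new_tokens=16)

torch.cuda.synchronize()          # drain any queued GPU work before timing
start = time.perf_counter()
model.generate(input_ids, max_new_tokens=16)
torch.cuda.synchronize()          # wait for all GPU work before reading the clock
print(f"sync latency: {time.perf_counter() - start:.3f}s")
```

An "async"-style number would simply omit the final `torch.cuda.synchronize()`, stopping the clock as soon as the output tensor is usable on the CPU side.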
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-finetuned-en-es-2 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/mbart-large-50", "model-index": [{"name": "mbart-large-50-finetuned-en-es-2", "results": []}]}
Gilito21/mbart-large-50-finetuned-en-es-2
null
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:facebook/mbart-large-50", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:32:02+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-facebook/mbart-large-50 #license-mit #autotrain_compatible #endpoints_compatible #region-us
# mbart-large-50-finetuned-en-es-2 This model is a fine-tuned version of facebook/mbart-large-50 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results ### Framework versions - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# mbart-large-50-finetuned-en-es-2\n\nThis model is a fine-tuned version of facebook/mbart-large-50 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-facebook/mbart-large-50 #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# mbart-large-50-finetuned-en-es-2\n\nThis model is a fine-tuned version of facebook/mbart-large-50 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 2\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3.0", "### Training results", "### Framework versions\n\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
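The trainer-generated card above omits usage. Going by the repo name, the checkpoint is an English-to-Spanish fine-tune of mBART-50, so the standard mBART-50 translation recipe (source language code on the tokenizer, forced BOS token for the target language) should apply; the translation direction itself is an assumption taken from the name, not the card.

```python
# Hedged sketch, not from the card: standard mBART-50 translation usage
# applied to the fine-tuned checkpoint named in this record.
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

repo = "Gilito21/mbart-large-50-finetuned-en-es-2"  # repo id from this record
tokenizer = MBart50TokenizerFast.from_pretrained(repo)
model = MBartForConditionalGeneration.from_pretrained(repo)

tokenizer.src_lang = "en_XX"  # source language code per the mBART-50 docs
inputs = tokenizer("The weather is lovely today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"],  # target: Spanish
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```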
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
matenice/bert-base-japanese-v3-marc-ja
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:35:08+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
zero-shot-classification
null
# GLiNER-large-v2.1-bird

![gliner bird logo](gliner-bird-logo.png)

Welcome to the repository for the `gliner-large-v2.1-bird` model, a fine-tuned version of GLiNER `gliner_large-v2.1` targeting descriptions of birds. This model enhances the capability to recognize detailed aspects of avian life, particularly nesting and dietary habits.

## Model Description

The `gliner-large-v2.1-bird` model is fine-tuned on synthetic data specifically created to capture the nuances of avian dietary and nesting behaviors. The original model, available at [GLiNER GitHub](https://github.com/urchade/GLiNER), has been adapted to better understand texts describing birds by recognizing and categorizing entities under specific labels related to food sources and nesting locations.

## Labels

This model recognizes the following labels, with each category described below:

- **GENERIC_PLANT_FOOD**: Refers to generic categories of plant-based food sources mentioned in bird descriptions.
- **GENERIC_ANIMAL_FOOD**: Includes broader categories of animal-based food sources.
- **PLANT_FOOD**: Specific types of plant food.
- **SPECIFIC_ANIMAL_FOOD**: Specific types of animal-based food sources mentioned, such as insects and types of fish.
- **LOCATION_NEST**: Describes the typical nesting locations of birds.
- **ITEM_NEST**: Items used by birds to construct their nests.

## Sample Data

Here is a sample from the dataset used for training the model:

```json
{
  "text": "The Alpine Swift primarily consumes flying insects such as wasps, bees, and flies. This bird captures its prey mid-air while swiftly flying through the alpine skies. It nests in high, rocky mountain crevices where it uses feathers and small sticks to construct a simple yet secure nesting environment.",
  "generic_plant_food": [],
  "generic_animal_food": ["flying insects"],
  "plant_food": [],
  "specific_animal_food": [
    "wasps",
    "bees",
    "flies"
  ],
  "location_nest": [
    "rocky mountain crevices"
  ],
  "item_nest": [
    "feathers",
    "small sticks"
  ]
}
```

## Usage

This model is used with the GLiNER library, which can be installed via pip as shown below:

```bash
pip install gliner
```

After installing the GLiNER library, you can use it to predict entities in texts related to bird descriptions by following these steps:

```python
from gliner import GLiNER

# Initialize GLiNER with the fine-tuned model
model = GLiNER.from_pretrained("wjbmattingly/gliner-large-v2.1-bird")

# Sample text for entity prediction
text = """
The Alpine Swift primarily consumes flying insects such as wasps, bees, and flies. This bird captures its prey mid-air while swiftly flying through the alpine skies. It nests in high, rocky mountain crevices where it uses feathers and small sticks to construct a simple yet secure nesting environment.
"""

# Labels for entity prediction (be sure to use the labels the model was trained on)
labels = ["GENERIC_PLANT_FOOD", "GENERIC_ANIMAL_FOOD", "PLANT_FOOD", "SPECIFIC_ANIMAL_FOOD", "LOCATION_NEST", "ITEM_NEST"]

# Perform entity prediction
entities = model.predict_entities(text, labels, threshold=0.5)

# Display predicted entities and their labels
for entity in entities:
    print(entity["text"], "=>", entity["label"])
```

{"language": ["en"], "license": "apache-2.0", "pipeline_tag": "zero-shot-classification"}
wjbmattingly/gliner-large-v2.1-bird
null
[ "pytorch", "zero-shot-classification", "en", "license:apache-2.0", "region:us" ]
null
2024-04-12T15:35:27+00:00
[]
[ "en" ]
TAGS #pytorch #zero-shot-classification #en #license-apache-2.0 #region-us
# GLiNER-large-v2.1-bird !gliner bird logo Welcome to the repository for the 'gliner-large-v2.1-bird' model, a fine-tuned version of the GLiNER 'gliner_large-v2.1' targeting specific types of data related to the descriptions of birds. This model enhances the capability to recognize detailed aspects of avian life, particularly focusing on their nesting and dietary habits. ## Model Description The 'gliner-large-v2.1-bird' model is fine-tuned on synthetic data specifically created to capture the nuances of avian dietary and nesting behaviors. The original model, available at GLiNER GitHub, has been adapted to better understand texts describing birds, by recognizing and categorizing specific labels related to food sources and nesting locations. ## Labels This model recognizes the following labels, with each category described below: - GENERIC_PLANT_FOOD: Refers to generic categories of plant-based food sources mentioned in bird descriptions. - GENERIC_ANIMAL_FOOD: Includes broader categories of animal-based food sources. - PLANT_FOOD: Specific types of plant food. - SPECIFIC_ANIMAL_FOOD: Specific types of animal-based food sources mentioned, such as insects and types of fish. - LOCATION_NEST: Describes the typical nesting locations of birds. - ITEM_NEST: Items used by birds to construct their nests. ## Sample Data Here is a sample from the dataset used for training the model: ## Usage This model can be used with the GLiNER library which should be installed via pip as shown below: After installing the GLiNER library, you can use it to predict entities in texts related to bird descriptions by following these steps:
[ "# GLiNER-large-v2.1-bird\n\n!gliner bird logo\n\nWelcome to the repository for the 'gliner-large-v2.1-bird' model, a fine-tuned version of the GLiNER 'gliner_large-v2.1' targeting specific types of data related to the descriptions of birds. This model enhances the capability to recognize detailed aspects of avian life, particularly focusing on their nesting and dietary habits.", "## Model Description\n\nThe 'gliner-large-v2.1-bird' model is fine-tuned on synthetic data specifically created to capture the nuances of avian dietary and nesting behaviors. The original model, available at GLiNER GitHub, has been adapted to better understand texts describing birds, by recognizing and categorizing specific labels related to food sources and nesting locations.", "## Labels\n\nThis model recognizes the following labels, with each category described below:\n\n- GENERIC_PLANT_FOOD: Refers to generic categories of plant-based food sources mentioned in bird descriptions.\n \n- GENERIC_ANIMAL_FOOD: Includes broader categories of animal-based food sources.\n \n- PLANT_FOOD: Specific types of plant food.\n \n- SPECIFIC_ANIMAL_FOOD: Specific types of animal-based food sources mentioned, such as insects and types of fish.\n \n- LOCATION_NEST: Describes the typical nesting locations of birds.\n \n- ITEM_NEST: Items used by birds to construct their nests.", "## Sample Data\n\nHere is a sample from the dataset used for training the model:", "## Usage\n\nThis model can be used with the GLiNER library which should be installed via pip as shown below:\n\n\n\nAfter installing the GLiNER library, you can use it to predict entities in texts related to bird descriptions by following these steps:" ]
[ "TAGS\n#pytorch #zero-shot-classification #en #license-apache-2.0 #region-us \n", "# GLiNER-large-v2.1-bird\n\n!gliner bird logo\n\nWelcome to the repository for the 'gliner-large-v2.1-bird' model, a fine-tuned version of the GLiNER 'gliner_large-v2.1' targeting specific types of data related to the descriptions of birds. This model enhances the capability to recognize detailed aspects of avian life, particularly focusing on their nesting and dietary habits.", "## Model Description\n\nThe 'gliner-large-v2.1-bird' model is fine-tuned on synthetic data specifically created to capture the nuances of avian dietary and nesting behaviors. The original model, available at GLiNER GitHub, has been adapted to better understand texts describing birds, by recognizing and categorizing specific labels related to food sources and nesting locations.", "## Labels\n\nThis model recognizes the following labels, with each category described below:\n\n- GENERIC_PLANT_FOOD: Refers to generic categories of plant-based food sources mentioned in bird descriptions.\n \n- GENERIC_ANIMAL_FOOD: Includes broader categories of animal-based food sources.\n \n- PLANT_FOOD: Specific types of plant food.\n \n- SPECIFIC_ANIMAL_FOOD: Specific types of animal-based food sources mentioned, such as insects and types of fish.\n \n- LOCATION_NEST: Describes the typical nesting locations of birds.\n \n- ITEM_NEST: Items used by birds to construct their nests.", "## Sample Data\n\nHere is a sample from the dataset used for training the model:", "## Usage\n\nThis model can be used with the GLiNER library which should be installed via pip as shown below:\n\n\n\nAfter installing the GLiNER library, you can use it to predict entities in texts related to bird descriptions by following these steps:" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-samsum-summarizer This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1382 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.1243 | 1.0 | 74 | 0.1382 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
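The card above does not include inference code. Since the model is a `facebook/bart-large-cnn` fine-tune and its name suggests SAMSum-style dialogue summarization (an inference from the name, not stated in the card), a hedged usage sketch might be:

```python
# Hypothetical usage sketch; the dialogue-summarization framing is inferred
# from the model name, not confirmed by the card.
from transformers import pipeline

summarizer = pipeline("summarization", model="Khushi870/bart-cnn-samsum-summarizer")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow."
)
print(summarizer(dialogue, max_length=60, min_length=10, do_sample=False))
```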
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "facebook/bart-large-cnn", "model-index": [{"name": "bart-cnn-samsum-summarizer", "results": []}]}
Khushi870/bart-cnn-samsum-summarizer
null
[ "transformers", "tensorboard", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:facebook/bart-large-cnn", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:36:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us
bart-cnn-samsum-summarizer ========================== This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1382 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bart #text2text-generation #generated_from_trainer #base_model-facebook/bart-large-cnn #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-victor This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.41.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.7 - Tokenizers 0.19.1
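The card above omits usage. Since this repository is a PEFT adapter for `HuggingFaceM4/idefics2-8b`, one plausible (unconfirmed) way to load it is to attach the adapter to the base model; that the repo stores a standard PEFT adapter is an assumption.

```python
# Hypothetical loading sketch; assumes a standard PEFT adapter layout.
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

base = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")
model = PeftModel.from_pretrained(base, "VictorSanh/test-victor")
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
# From here, build image+text inputs with `processor` and call model.generate(...).
```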
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "HuggingFaceM4/idefics2-8b", "model-index": [{"name": "test-victor", "results": []}]}
VictorSanh/test-victor
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:HuggingFaceM4/idefics2-8b", "license:apache-2.0", "region:us" ]
null
2024-04-12T15:38:49+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us
# test-victor This model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 1 ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.41.0.dev0 - Pytorch 2.0.1+cu118 - Datasets 2.14.7 - Tokenizers 0.19.1
[ "# test-victor\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 64\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.7\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #tensorboard #safetensors #generated_from_trainer #base_model-HuggingFaceM4/idefics2-8b #license-apache-2.0 #region-us \n", "# test-victor\n\nThis model is a fine-tuned version of HuggingFaceM4/idefics2-8b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 32\n- total_train_batch_size: 64\n- total_eval_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.41.0.dev0\n- Pytorch 2.0.1+cu118\n- Datasets 2.14.7\n- Tokenizers 0.19.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
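No usage code is given in the card above. Given the `vision-encoder-decoder` tag (and the repository name, which suggests OCR of Mexican passports), a TrOCR/Donut-style image-to-text flow is one plausible sketch. Whether `AutoProcessor` resolves for this repository, and the input file name, are assumptions.

```python
# Hypothetical sketch assuming a TrOCR/Donut-style image-to-text pipeline.
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("BuroIdentidadDigital/pasaporte_Mex_v2")
processor = AutoProcessor.from_pretrained("BuroIdentidadDigital/pasaporte_Mex_v2")  # assumed to resolve

image = Image.open("passport_scan.jpg").convert("RGB")  # hypothetical input file
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```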
{"library_name": "transformers", "tags": []}
BuroIdentidadDigital/pasaporte_Mex_v2
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:40:12+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #vision-encoder-decoder #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
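The quick-start section above is blank. The `llama`, `text-generation`, and `conversational` tags suggest the standard causal-LM flow below; that the tokenizer ships a chat template is an assumption based on the `conversational` tag.

```python
# Hypothetical generation sketch; prompt content and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RonanMcGovern/TinyLlama-SFT-Unsloth-LoRA"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

messages = [{"role": "user", "content": "Give me one fun fact about llamas."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```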
{"library_name": "transformers", "tags": ["unsloth"]}
RonanMcGovern/TinyLlama-SFT-Unsloth-LoRA
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:40:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="https://cdn-uploads.huggingface.co/production/uploads/65ad2502043d53781aad2ee4/_fMSiKgc791PxxBfubWna.png" alt="image" width="540" height="540" style="margin-bottom: 30px;">

# 💫 Pulsar_7B

⚠️ This is an experimental model!

A more compliant, RP-oriented version of [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model), fine-tuned on carefully selected datasets. It's smart and adept at following the desired markdown format and adhering to the provided character card. The first message of the character card significantly influences its writing style.

Pulsar_7B pairs well with guidance from CFG Scale and works effectively with [PLists + Ali:Chat](https://wikia.schneedc.com/bot-creation/trappu/introduction) character cards.

Pulsar_7B was fine-tuned on the following datasets:
- grimulkan/theory-of-mind
- grimulkan/physical-reasoning
- ResplendentAI/Luna_Alpaca
- unalignment/toxic-dpo-v0.2
- kira/math-dpo
- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED

## Quantizations
Thanks to mradermacher, static GGUF quants are available [here](https://huggingface.co/mradermacher/Pulsar_7B-GGUF).

## Formatting/Preset
Pulsar_7B works well with Alpaca; it's not a picky model when it comes to formatting or presets. Mistral should be compatible too. The custom chat template from [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) also performs well:

```
{% for message in messages %}{% if message['role'] == 'user' %}{{ '### Instruction:\n' + message['content'] + '\n### Response:\n' }}{% elif message['role'] == 'assistant' %}{{ message['content'] + eos_token}}{% elif message['role'] == 'system' %}{{ '### System:\n' + message['content'] + '\n' }}{% endif %}{% endfor %}
```

<br>

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rmdhirr__Pulsar_7B).

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |74.23|
|AI2 Reasoning Challenge (25-Shot)|69.71|
|HellaSwag (10-Shot)              |86.99|
|MMLU (5-Shot)                    |63.72|
|TruthfulQA (0-shot)              |69.28|
|Winogrande (5-shot)              |84.06|
|GSM8k (5-shot)                   |71.65|

---
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="100"/>](https://github.com/unslothai/unsloth)
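To make the template above concrete, here is a hedged sketch of applying it at inference time. The card does not say whether the repository ships this template in its tokenizer config; if it does not, the Jinja string above can be assigned to `tokenizer.chat_template` first, as noted in the comments.

```python
# Hypothetical inference sketch using the custom chat template shown above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rmdhirr/Pulsar_7B")
model = AutoModelForCausalLM.from_pretrained("rmdhirr/Pulsar_7B", device_map="auto")  # needs `accelerate`

# If the repo does not ship the template, paste the Jinja string from above:
# tokenizer.chat_template = "{% for message in messages %}...{% endfor %}"

messages = [
    {"role": "system", "content": "You are the narrator of a quiet mountain-village scene."},
    {"role": "user", "content": "Describe the evening as the lanterns are lit."},
]
# No add_generation_prompt needed: this template already appends '### Response:'
# after each user turn.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```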
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "dpo", "uncensored", "roleplay", "fine-tune"], "datasets": ["grimulkan/theory-of-mind", "grimulkan/physical-reasoning", "ResplendentAI/Luna_Alpaca", "unalignment/toxic-dpo-v0.2", "kira/math-dpo", "athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED"], "base_model": "MTSAIR/multi_verse_model", "model-index": [{"name": "Pulsar_7B", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 69.71, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 86.99, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 63.72, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 69.28}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 84.06, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 71.65, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rmdhirr/Pulsar_7B", "name": "Open LLM Leaderboard"}}]}]}
rmdhirr/Pulsar_7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "text-generation-inference", "unsloth", "trl", "dpo", "uncensored", "roleplay", "fine-tune", "conversational", "en", "dataset:grimulkan/theory-of-mind", "dataset:grimulkan/physical-reasoning", "dataset:ResplendentAI/Luna_Alpaca", "dataset:unalignment/toxic-dpo-v0.2", "dataset:kira/math-dpo", "dataset:athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED", "base_model:MTSAIR/multi_verse_model", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:41:08+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #uncensored #roleplay #fine-tune #conversational #en #dataset-grimulkan/theory-of-mind #dataset-grimulkan/physical-reasoning #dataset-ResplendentAI/Luna_Alpaca #dataset-unalignment/toxic-dpo-v0.2 #dataset-kira/math-dpo #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
<img src="URL alt="image" width="540" height="540" style="margin-bottom: 30px;"> Pulsar\_7B ========== ️ This is an experimental model! A more compliant, RP-oriented version of MTSAIR/multi\_verse\_model, fine-tuned on carefully selected datasets. It's smart, adept at following the desired markdown format and adhering to the provided character card. The first message of the character card significantly influences its writing style. Pulsar\_7B pairs well with guidance from CFG Scale and works effectively with PLists + Ali:Chat character cards. Pulsar\_7B was fine-tuned on the following datasets: * grimulkan/theory-of-mind * grimulkan/physical-reasoning * ResplendentAI/Luna\_Alpaca * unalignment/toxic-dpo-v0.2 * kira/math-dpo * athirdpath/DPO\_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED Quantizations ------------- Thanks to mradermacher, static GGUF quants are available here. Formatting/Preset ----------------- Pulsar\_7B works well with Alpaca, it's not a picky model when it comes to formatting/preset. Mistral should be compatible too. The custom chat template from MTSAIR/multi\_verse\_model also performs well: Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here --- <img src="URL width="100"/>
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #dpo #uncensored #roleplay #fine-tune #conversational #en #dataset-grimulkan/theory-of-mind #dataset-grimulkan/physical-reasoning #dataset-ResplendentAI/Luna_Alpaca #dataset-unalignment/toxic-dpo-v0.2 #dataset-kira/math-dpo #dataset-athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
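As with its LoRA sibling, the quick-start above is blank. The repository name suggests QLoRA training, so a hedged sketch of reloading the weights in 4-bit may be useful; 4-bit loading is an assumption drawn from the name, and it requires `bitsandbytes` plus a CUDA GPU.

```python
# Hypothetical 4-bit loading sketch; 4-bit is inferred from "QLoRA" in the repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo = "RonanMcGovern/TinyLlama-SFT-Unsloth-QLoRA"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",  # needs `accelerate`
)
```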
{"library_name": "transformers", "tags": ["unsloth"]}
RonanMcGovern/TinyLlama-SFT-Unsloth-QLoRA
null
[ "transformers", "safetensors", "llama", "text-generation", "unsloth", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:41:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #unsloth #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
artixjain/diff_instr_model
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:46:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
## For descriptions of all LoRA models, see the websites below. ## Overseas websites: | web1 | web2 | web3 | web4 | |--------|--------|--------|--------| | [Huggingface](https://huggingface.co/Tasty-Rice) | [Civitai](https://civitai.com/user/Tasty_Rice) | [TensorArt](https://tensor.art/u/678859606205704728) | [shakker](https://www.shakker.ai/userpage/6a54842e3a3a4491854e2b0b9af876e6/publish) | ### Chinese text-to-image model websites: | web1 | web2 | |--------|--------| | [LibLibAI](https://www.liblib.art/userpage/99d29ad5db474146abbd70752f534981/publish) | [TusiArt](https://tusi.cn/u/661137278105696783) | [LibLibAI](https://www.liblib.art/userpage/99d29ad5db474146abbd70752f534981/publish) will be the first site to receive updates. My models are currently released only on the websites above, and update progress is not uniform across them. If you see them on any other website, it is an unauthorized re-upload.
{"license": "artistic-2.0", "library_name": "diffusers", "tags": ["art"], "pipeline_tag": "text-to-image"}
Tasty-Rice/lora-all
null
[ "diffusers", "art", "text-to-image", "license:artistic-2.0", "region:us" ]
null
2024-04-12T15:50:05+00:00
[]
[]
TAGS #diffusers #art #text-to-image #license-artistic-2.0 #region-us
For descriptions of all LoRA models, see the websites below. ---------------------------------------------- Overseas websites: ----------------- ### Chinese text-to-image model websites: LibLibAI will be the first site to receive updates. My models are currently released only on the websites above, and update progress is not uniform across them. If you see them on any other website, it is an unauthorized re-upload.
[ "### China text-to-image generative model Website;\n\n\n\nLibLibAIThis website will be accept updates, The first.\nMy model is currently only released on the above websites, and the progress is not uniform. If you see it on other websites, it is unauthorized transfer." ]
[ "TAGS\n#diffusers #art #text-to-image #license-artistic-2.0 #region-us \n", "### China text-to-image generative model Website;\n\n\n\nLibLibAIThis website will be accept updates, The first.\nMy model is currently only released on the above websites, and the progress is not uniform. If you see it on other websites, it is unauthorized transfer." ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small Ro - IAzari This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 Ro data. Results: - Loss: 0.2959 - Word Error Rate (WER): 20.8078 ## Description A model intended for transcribing Moldovan Romanian speech to text. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 200 - training_steps: 1000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | WER | |:---------------------:|:-----:|:----:|:--------------------:|:-------:| | 0.0562 | 2.99 | 500 | 0.2754 | 21.7093 | | 0.0056 | 5.99 | 1000 | 0.2959 | 20.8078 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
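The original card stops short of a usage snippet; below is a minimal sketch of transcribing Romanian audio with this checkpoint via the 🤗 transformers `pipeline` API. The audio path is a placeholder, and `chunk_length_s` is an assumption for handling recordings longer than 30 seconds.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="Yehoward/whisper-small-ro",
    chunk_length_s=30,  # assumption: chunked decoding for long inputs
)

# "speech.wav" is a placeholder path to a Romanian speech recording.
result = asr("speech.wav")
print(result["text"])
```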
{"language": ["ro"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer"], "base_model": "openai/whisper-small", "model-index": [{"name": "Whisper Small Ro - IAzari", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 11.0 Ro", "type": "mozilla-foundation/common_voice_11_0", "config": "ro", "split": "None", "args": "config: ro, split: test"}, "metrics": [{"type": "wer", "value": 20.807789397764154, "name": "Wer"}]}]}]}
Yehoward/whisper-small-ro
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "ro", "dataset:mozilla-foundation/common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-12T15:50:06+00:00
[]
[ "ro" ]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ro #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Small Ro - IAzari ========================= This model is a fine-tuned version of openai/whisper-small on the Common Voice 11.0 Ro data. Results: * Loss: 0.2959 * Word Error Rate (WER): 20.8078 Description ---------- A model intended for transcribing Moldovan Romanian speech to text. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 200 * training\_steps: 1000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Hiperparametrii de antrenare\n\n\nHiperparametri utilizați la antrenare:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Rezultate de antrenare", "### Versiunile cadrului\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #ro #dataset-mozilla-foundation/common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Hiperparametrii de antrenare\n\n\nHiperparametri utilizați la antrenare:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 1000\n* mixed\\_precision\\_training: Native AMP", "### Rezultate de antrenare", "### Versiunile cadrului\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with GPTQ. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly under your use-case conditions to see whether the smashed model benefits you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** Where required by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement once all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check the requirements of the original repo TinyLlama/TinyLlama-1.1B-Chat-v0.3. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash
pip install auto-gptq
pip install git+https://github.com/huggingface/optimum.git
pip install git+https://github.com/huggingface/transformers.git
pip install --upgrade accelerate
``` 2. Load & run the model. ```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the GPTQ-quantized ("smashed") model; device_map='auto' places it on the available devices.
model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-Chat-v0.3-GPTQ-8bit-smashed",
                                             trust_remote_code=True, device_map='auto')

# The tokenizer is unchanged by quantization, so it comes from the original base model.
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.3")

# Tokenize a prompt and generate up to 216 new tokens.
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model, TinyLlama/TinyLlama-1.1B-Chat-v0.3, which provided the base model, before using this smashed model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/TinyLlama-TinyLlama-1.1B-Chat-v0.3-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T15:51:32+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-Chat-v0.3 installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-Chat-v0.3 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-Chat-v0.3 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-Chat-v0.3 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-Chat-v0.3 installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-Chat-v0.3 before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
text-generation
transformers
This is an ExLlamaV2 quantized model in 4bpw of [mpasila/Mistral-7B-Erebus-v3-Instruct-32k](https://huggingface.co/mpasila/Mistral-7B-Erebus-v3-Instruct-32k) using the default calibration dataset. # Original Model card: # Mistral-7B-Erebus-v3-Instruct-32k This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The merge script was copied from [ichigoberry/pandafish-2-7b-32k](https://huggingface.co/ichigoberry/pandafish-2-7b-32k). ## Merge Details ### Merge Method This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method with [alpindale/Mistral-7B-v0.2-hf](https://huggingface.co/alpindale/Mistral-7B-v0.2-hf) as the base. ### Models Merged The following models were included in the merge: * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3) ### Configuration The following YAML configuration was used to produce this model: ```yaml
models:
  - model: alpindale/Mistral-7B-v0.2-hf
    # No parameters necessary for base model
  - model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      density: 0.53
      weight: 0.4
  - model: KoboldAI/Mistral-7B-Erebus-v3
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: alpindale/Mistral-7B-v0.2-hf
parameters:
  int8_mask: true
dtype: bfloat16
```
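To reproduce a merge like this one, the YAML above can be fed to mergekit. The sketch below follows the Python usage shown in mergekit's examples; the config path and output directory are placeholder assumptions, and API details may differ between mergekit versions.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yml" is assumed to contain the YAML configuration shown above.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-model",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True),  # also copy the tokenizer into the output
)
```

The equivalent command-line call would be `mergekit-yaml config.yml ./merged-model`.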
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["alpindale/Mistral-7B-v0.2-hf", "mistralai/Mistral-7B-Instruct-v0.2", "KoboldAI/Mistral-7B-Erebus-v3"]}
mpasila/Mistral-7B-Erebus-v3-Instruct-32k-exl2-4bpw
null
[ "transformers", "mistral", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "arxiv:2306.01708", "base_model:alpindale/Mistral-7B-v0.2-hf", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:KoboldAI/Mistral-7B-Erebus-v3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:52:49+00:00
[ "2311.03099", "2306.01708" ]
[]
TAGS #transformers #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Erebus-v3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
This is an ExLlamaV2 quantized model in 4bpw of mpasila/Mistral-7B-Erebus-v3-Instruct-32k using the default calibration dataset. # Original Model card: # Mistral-7B-Erebus-v3-Instruct-32k This is a merge of pre-trained language models created using mergekit. The merge script was copied from ichigoberry/pandafish-2-7b-32k. ## Merge Details ### Merge Method This model was merged using the DARE TIES merge method with alpindale/Mistral-7B-v0.2-hf as the base. ### Models Merged The following models were included in the merge: * mistralai/Mistral-7B-Instruct-v0.2 * KoboldAI/Mistral-7B-Erebus-v3 ### Configuration The following YAML configuration was used to produce this model:
[ "# Original Model card:", "# Mistral-7B-Erebus-v3-Instruct-32k\n\nThis is a merge of pre-trained language models created using mergekit.\n\nMerge script copied from this ichigoberry/pandafish-2-7b-32k.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using alpindale/Mistral-7B-v0.2-hf as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* KoboldAI/Mistral-7B-Erebus-v3", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #mistral #text-generation #mergekit #merge #arxiv-2311.03099 #arxiv-2306.01708 #base_model-alpindale/Mistral-7B-v0.2-hf #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-KoboldAI/Mistral-7B-Erebus-v3 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Original Model card:", "# Mistral-7B-Erebus-v3-Instruct-32k\n\nThis is a merge of pre-trained language models created using mergekit.\n\nMerge script copied from this ichigoberry/pandafish-2-7b-32k.", "## Merge Details", "### Merge Method\n\nThis model was merged using the DARE TIES merge method using alpindale/Mistral-7B-v0.2-hf as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* mistralai/Mistral-7B-Instruct-v0.2\n* KoboldAI/Mistral-7B-Erebus-v3", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
peft
### PEFT Fine-Tuned Mistral-7B-Instruct-v0.2 Model # 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 Fantastica-7b-Instruct-0.2-Italian is an Italian-speaking, instruction-finetuned large language model. 🇮🇹 # Fantastica-7b-Instruct-0.2-Italian's distinctive features: - Mistral-7B-Instruct-v0.2 as the base model. - Generation of texts in Italian in the style of 4 great authors of Italian literature: Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello. - Generation of novel plots in Italian. - Generation of descriptions of paintings in Italian. - Trained on: selected entries from Wikipedia, a corpus of Italian books from liberliber (https://liberliber.it/), and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello. - Training took 70 hours on a Google Colab A100. - First version of a series of models dedicated to Italian literature. # Notes on sources: - The model makes use of literary works no longer covered by copyright, largely derived from the liberliber.it corpus. - The model uses entries selected from the free Wikipedia encyclopedia. # PROMPT FORMAT \<s\>[INST]{instruction}[/INST] # Examples: # INSTRUCTION @Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente. # RESPONSE # temperature: 0.6 (Style imitation: low. Content consistency: medium) Titolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare... # temperature: 0.7 (Style imitation: medium; Content consistency: low) Titolo: Viaggio nel regno dominato da mostri e spietati signori\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. 
La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c # TRY SETTING DIFFERENT PARAMETERS! # OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS: # Prompt to generate texts in the style of Carlo Collodi @Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate sentences in the style of Luigi Pirandello @Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}. # Prompt to generate texts in the style of Luigi Pirandello @Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate texts in the style of Carlo Dossi @Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate texts in the style of Alessandro Manzoni @Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate Movie plots (Wikipedia style) @Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico. @Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico. @Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico. @Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico. # Prompt to generate plots of novels (Wikipedia style) @Scribis @Italiano. Raccontami la trama di un romanzo. @Scribis @Italiano. Riassumi la storia di un libro. @Scribis @Italiano. Esponi l'intreccio di un'opera letteraria. @Scribis @Italiano. Scrivi la trama di un romanzo. # Prompt to generate painting descriptions (Wikipedia style) @Scribis @Italiano. Il tuo compito è descrivere un dipinto. @Scribis @Italiano. Descrivi un dipinto. @Scribis @Italiano. Descrivi un celebre dipinto. @Scribis @Italiano. Descrivi un famoso dipinto. # Prompt to generate a Greek mythology topic. (Wikipedia style) @Scribis @Italiano. Parlami di un argomento di mitologia greca. @Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca. 
More information is coming soon to help you make the best use of the model. Democratize AI! ## Limitations - Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs. - The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified. ## Model Author: Fabio Martines <[email protected]>
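The card documents the prompt format but not a loading snippet; the following is a minimal sketch of one way to load the adapter with PEFT. The dtype, sampling settings, and taking the tokenizer from the base instruct model are assumptions, not part of the original card.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads mistralai/Mistral-7B-Instruct-v0.2 and applies this PEFT adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "scribis/Fantastica-7b-Instruct-0.2-Italian",
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit a single modern GPU
    device_map="auto",
)
# Assumption: reuse the tokenizer of the base instruct model.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Prompt format from the card; the tokenizer prepends the <s> BOS token itself.
prompt = "[INST]@Scribis @Italiano. Scrivi la trama di un romanzo.[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```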
{"language": ["it"], "license": "apache-2.0", "library_name": "peft", "tags": ["Italian", "PEFT", "Mistral", "finetuning", "Text Generation"], "datasets": ["scribis/Wikipedia_it_Trame_Romanzi", "scribis/Wikipedia-it-Descrizioni-di-Dipinti", "scribis/Wikipedia-it-Trame-di-Film", "scribis/Corpus-Frasi-da-Opere-Letterarie", "scribis/Wikipedia-it-Mitologia-Greca"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "pipeline_tag": "text-generation"}
scribis/Fantastica-7b-Instruct-0.2-Italian
null
[ "peft", "safetensors", "Italian", "PEFT", "Mistral", "finetuning", "Text Generation", "text-generation", "it", "dataset:scribis/Wikipedia_it_Trame_Romanzi", "dataset:scribis/Wikipedia-it-Descrizioni-di-Dipinti", "dataset:scribis/Wikipedia-it-Trame-di-Film", "dataset:scribis/Corpus-Frasi-da-Opere-Letterarie", "dataset:scribis/Wikipedia-it-Mitologia-Greca", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-12T15:54:52+00:00
[]
[ "it" ]
TAGS #peft #safetensors #Italian #PEFT #Mistral #finetuning #Text Generation #text-generation #it #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Mitologia-Greca #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
### PEFT Fine-Tuned Mistral-7B-Instruct-v0.2 Model # 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 Fantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹 # Fantastica-7b-Instruct-0.2-Italian's peculiar features: - Mistral-7B-Instruct-v0.2 v.02 as base. - generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello. - generation of novel plots in Italian. - generation of descriptions of paintings in Italian. - Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello. - Training took 70 hours with Google Colab A100. - First version of a series of models dedicated to Italian literature. - # Notes on sources: - the model makes use of literary works no longer covered by copyright, largely derived from URL corpus. - the model uses entries selected from the free Wikipedia encyclopedia. # PROMPT FORMAT \<s\>[INST]{instruction}[/INST] # Examples: # INSTRUCTION @Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente. # RESPONSE # temperature:0.6 (Style imitation: low. Content consistency: medium) Titolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare... # temperature: 0.7 (Style imitation: medium; Content consistency: low) Titolo: Viaggio nel regno dominato da mostri e spietati signori\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. 
La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c # TRY SETTING DIFFERENT PARAMETERS! # OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS: # Prompt to generate texts in the style of Carlo Collodi @Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate sentences in the style of Luigi Pirandello @Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}. # Prompt to generate texts in the style of Luigi Pirandello @Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate texts in the style of Carlo Dossi @Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate texts in the style of Alessandro Manzoni @Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}. # Prompt to generate Movie plots (Wikipedia style) @Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico. @Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico. @Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico. @Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico. # Prompt to generate plots of novels (Wikipedia style) @Scribis @Italiano. Raccontami la trama di un romanzo. @Scribis @Italiano. Riassumi la storia di un libro. @Scribis @Italiano. Esponi l'intreccio di un'opera letteraria. @Scribis @Italiano. Scrivi la trama di un romanzo. # Prompt to generate painting descriptions (Wikipedia style) @Scribis @Italiano. Il tuo compito è descrivere un dipinto. @Scribis @Italiano. Descrivi un dipinto. @Scribis @Italiano. Descrivi un celebre dipinto. @Scribis @Italiano. Descrivi un famoso dipinto. # Prompt to generate a Greek mythology topic. (Wikipedia style) @Scribis @Italiano. Parlami di un argomento di mitologia greca. @Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca. 
More information coming soon to help you make the best use of the model. Democratize AI! ## Limitations - Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs. - The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified. ## Model Author: Fabio Martines <scribisonline@URL>
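A minimal inference sketch (not part of the original card): it assumes the checkpoint is published as a PEFT adapter loadable with `AutoPeftModelForCausalLM`, and the repo id below is a placeholder for this model's actual Hub id.

```python
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

ADAPTER_ID = "<this-model-hub-id>"  # placeholder: replace with the actual repo id

# Load the PEFT adapter; assumption: the adapter config points at Mistral-7B-Instruct-v0.2 as base.
model = AutoPeftModelForCausalLM.from_pretrained(ADAPTER_ID, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# The tokenizer prepends <s> automatically, so only [INST]...[/INST] is passed here.
prompt = "[INST]@Scribis @Italiano. Scrivi la trama di un romanzo.[/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.6)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

As the examples above suggest, varying the temperature trades style imitation against content consistency.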
[ "### PEFT Fine-Tuned Mistral-7B-Instruct-v0.2 Model", "# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 \n\nFantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹", "# Fantastica-7b-Instruct-0.2-Italian's peculiar features:\n\n- Mistral-7B-Instruct-v0.2 v.02 as base.\n- generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.\n- generation of novel plots in Italian.\n- generation of descriptions of paintings in Italian.\n- Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello.\n- Training took 70 hours with Google Colab A100.\n- First version of a series of models dedicated to Italian literature.\n-", "# Notes on sources:\n- the model makes use of literary works no longer covered by copyright, largely derived from URL corpus.\n- the model uses entries selected from the free Wikipedia encyclopedia.", "# PROMPT FORMAT\n\n\\<s\\>[INST]{instruction}[/INST]", "# Examples:", "# INSTRUCTION\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.", "# RESPONSE", "# temperature:0.6 (Style imitation: low. Content consistency: medium)\n\nTitolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. E, di lì a poco, si mette a parlare...", "# temperature: 0.7 (Style imitation: medium; Content consistency: low)\n\nTitolo: Viaggio nel regno dominato da mostri e spietati signori\\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. 
La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c", "# TRY SETTING DIFFERENT PARAMETERS!", "# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:", "# Prompt to generate texts in the style of Carlo Collodi\n\n@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate sentences in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.", "# Prompt to generate texts in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate texts in the style of Carlo Dossi\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate texts in the style of Alessandro Manzoni\n\n@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate Movie plots (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.\n@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.\n@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.\n@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.", "# Prompt to generate plots of novels (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un romanzo.\n@Scribis @Italiano. Riassumi la storia di un libro.\n@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.\n@Scribis @Italiano. Scrivi la trama di un romanzo.", "# Prompt to generate painting descriptions (Wikipedia style)\n\t\t \n@Scribis @Italiano. Il tuo compito è descrivere un dipinto.\n@Scribis @Italiano. Descrivi un dipinto.\n@Scribis @Italiano. Descrivi un celebre dipinto.\n@Scribis @Italiano. Descrivi un famoso dipinto.", "# Prompt to generate a Greek mythology topic. (Wikipedia style)\t\t\n\t\t\t\t\n@Scribis @Italiano. 
Parlami di un argomento di mitologia greca.\n@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.\t\t\n\nMore information coming soon to help you make the best use of the model\n\nDemocratize AI!", "## Limitations\n\n- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.\n- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.", "## Model Author:\nFabio Martines <scribisonline@URL>" ]
[ "TAGS\n#peft #safetensors #Italian #PEFT #Mistral #finetuning #Text Generation #text-generation #it #dataset-scribis/Wikipedia_it_Trame_Romanzi #dataset-scribis/Wikipedia-it-Descrizioni-di-Dipinti #dataset-scribis/Wikipedia-it-Trame-di-Film #dataset-scribis/Corpus-Frasi-da-Opere-Letterarie #dataset-scribis/Wikipedia-it-Mitologia-Greca #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### PEFT Fine-Tuned Mistral-7B-Instruct-v0.2 Model", "# 🇮🇹 Fantastica-7b-Instruct-0.2-Italian 🇮🇹 \n\nFantastica-7b-Instruct-0.2-Italian is an Italian speaking, instruction finetuned, Large Language model. 🇮🇹", "# Fantastica-7b-Instruct-0.2-Italian's peculiar features:\n\n- Mistral-7B-Instruct-v0.2 v.02 as base.\n- generation of texts in Italian according to the style of 4 great authors of Italian literature. Alessandro Manzoni, Carlo Dossi, Carlo Collodi and Luigi Pirandello.\n- generation of novel plots in Italian.\n- generation of descriptions of paintings in Italian.\n- Trained on: selected entries from Wikipedia, corpus of Italian books from liberliber (URL and selected paragraphs from Alessandro Manzoni, Carlo Dossi, Carlo Collodi e Luigi Pirandello.\n- Training took 70 hours with Google Colab A100.\n- First version of a series of models dedicated to Italian literature.\n-", "# Notes on sources:\n- the model makes use of literary works no longer covered by copyright, largely derived from URL corpus.\n- the model uses entries selected from the free Wikipedia encyclopedia.", "# PROMPT FORMAT\n\n\\<s\\>[INST]{instruction}[/INST]", "# Examples:", "# INSTRUCTION\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: Un viaggiatore giunge dapprima al Castello di Vindona dove viene accolto freddamente dai servitori. Il viaggiatore entra nella sala del trono e percorre un tappeto di pelle di drago. Il viaggiatore è presentato al signore del castello, uno strano tiranno sulla cui testa sta una corona fatta di scheletri di serpente.", "# RESPONSE", "# temperature:0.6 (Style imitation: low. Content consistency: medium)\n\nTitolo: Arrivo al castello di Vindone: una sorpresa. Dall'alto d'un colle, un viaggiatore che fa la sua corsa a cavallo, dopo avere superato due guardiani della strada ferrata, entra in un piccolo villaggio, il quale, a vista, pare abbandonato. Egli vede le finestre di un castello che sembra dormire; e, nonostante non sia il giorno, entra in quel castello, attraversa un salotto e entra in una sala, la cui volta è fiorita di rose bianche. Sul tappeto di pelle di drago, si trova sdraiato un signore con la testa coperta da una corona fatta d'scheletri di serpente. È lui. Accetta il viaggiatore; lo conduce nel proprio appartamento; lo mette a sedere accanto a sé, e gli offre del vino. Il viaggiatore, per rispetto, non può rifiutare; e, dopo di aver bevuto, torna a sedersi. Il signore gli fa un breve discorso; e poi, lo fa salire in carrozza, e lo porta in una torre situata di fronte al castello. Il viaggiatore è molto sorpreso di quanto ha visto, ma non sa spiegarlo. 
E, di lì a poco, si mette a parlare...", "# temperature: 0.7 (Style imitation: medium; Content consistency: low)\n\nTitolo: Viaggio nel regno dominato da mostri e spietati signori\\n E lo portava su' braccia, come se fosse stato un neonato, il gran viaggiatore che arrivava alla rocciosa villa di Vindòna, chiusa tra due viali coperti in tutta la loro circonferenza da folte siepi di rosmarino con lassù in cima ciliegie arrostite. La mattina dopo la sua entrata nel Regno dei Cipressi, egli si trovò seduto sopra 'l throno', tra la poltrona d'oro dello Scalco di Ferro e i cavalieri dell''Armata Bianca', i quali facevan finta di non avvertenersene affatto; ma lui era consunto dall'impazienza di poter vedere in viso il suo ostinatissimo nemico, lo Stravagante, a cui non aveva mai risposto neppure all'offesa della parola, né aveva ricambiato mai un saluto, né gli s'era mai voltato alle spalle, ma sempre via fuga come uno sfuggito al terremoto. Quel giorno però, a occhi chiuse, si fece portare sul cimasuolo dal vecchio valletto smemorato degli usci e delle scale, senz'altro vestito dello stesso color del muro grigiastro della torre, il quale soltanto ai salti riusciva ad orientarsi nell'andito buio ai confini dell'anticamera, ove il Re dormiva coi suoi cavalloni sotto la coperta argento di luna. Appena riuscite nelle stanze superiori dell'anticamera, la barbarica presenza dell'intruso, col suo aspetto quasi gigantesco e il suo corpo enorme ingordito dal tempo, c", "# TRY SETTING DIFFERENT PARAMETERS!", "# OTHER INTERESTING PROMPTS YOU CAN TRY TO GENERATE ITALIAN TEXTS:", "# Prompt to generate texts in the style of Carlo Collodi\n\n@Scribis @Carlocollodi @italiano. Scrivi un racconto seguendo lo stile letterario vivace e pittoresco di Carlocollodi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate sentences in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @italiano. Migliora lo stile della seguente frase seguendo lo stile letterario grottesco e pittoresco di Luigipirandello di Scribis: {tue indicazioni}.", "# Prompt to generate texts in the style of Luigi Pirandello\n\n@Scribis @Luigipirandello @Italiano. Scrivi una scena nello stile letterario vivace e grottesco di Luigipirandello di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate texts in the style of Carlo Dossi\n\n@Scribis @Carlodossi @Italiano. Scrivi una scena nello stile lettarario ironico ed espressivo di Carlodossi di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate texts in the style of Alessandro Manzoni\n\n@Scribis @Alessandromanzoni @Italiano. Scrivi una scena nello stile letterario ironico ed espressivo di Alessandromanzoni di Scribis a partire dalle seguenti sintetiche indicazioni: {tue indicazioni}.", "# Prompt to generate Movie plots (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un film di avventura o di un film storico.\n@Scribis @Italiano. Riassumi la storia di un film di avventura o di un film storico.\n@Scribis @Italiano. Esponi l'intreccio di un film di avventura o di un film storico.\n@Scribis @Italiano. Scrivi la trama di un film di avventura o di un film storico.", "# Prompt to generate plots of novels (Wikipedia style)\n\n@Scribis @Italiano. Raccontami la trama di un romanzo.\n@Scribis @Italiano. Riassumi la storia di un libro.\n@Scribis @Italiano. Esponi l'intreccio di un'opera letteraria.\n@Scribis @Italiano. 
Scrivi la trama di un romanzo.", "# Prompt to generate painting descriptions (Wikipedia style)\n\t\t \n@Scribis @Italiano. Il tuo compito è descrivere un dipinto.\n@Scribis @Italiano. Descrivi un dipinto.\n@Scribis @Italiano. Descrivi un celebre dipinto.\n@Scribis @Italiano. Descrivi un famoso dipinto.", "# Prompt to generate a Greek mythology topic. (Wikipedia style)\t\t\n\t\t\t\t\n@Scribis @Italiano. Parlami di un argomento di mitologia greca.\n@Scribis @Italiano. Raccontami qualcosa riguardante la mitologia greca.\t\t\n\nMore information coming soon to help you make the best use of the model\n\nDemocratize AI!", "## Limitations\n\n- Fantastica-7b-Instruct-0.2-Italian may not handle complex or nuanced queries well and may struggle with ambiguous or poorly formatted inputs.\n- The model may generate responses that are factually incorrect or nonsensical. It should be used with caution, and outputs should be carefully verified.", "## Model Author:\nFabio Martines <scribisonline@URL>" ]
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # biogpt-adverse-ner This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1600 - Precision: 0.4255 - Recall: 0.5280 - F1: 0.4712 - Accuracy: 0.9471 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 449 | 0.1879 | 0.3350 | 0.3187 | 0.3266 | 0.9329 | | 0.2647 | 2.0 | 898 | 0.1653 | 0.3653 | 0.4664 | 0.4097 | 0.9430 | | 0.159 | 3.0 | 1347 | 0.1600 | 0.4255 | 0.5280 | 0.4712 | 0.9471 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
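A minimal usage sketch (not generated by the Trainer): loading the checkpoint as a token-classification pipeline. The entity label set is not documented above, so the example input and any label interpretation are assumptions.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; aggregation merges sub-word tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="akoraingdkb/biogpt-adverse-ner",
    aggregation_strategy="simple",
)
print(ner("The patient reported severe nausea and dizziness after starting ibuprofen."))
```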
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "microsoft/biogpt", "model-index": [{"name": "biogpt-adverse-ner", "results": []}]}
akoraingdkb/biogpt-adverse-ner
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "token-classification", "generated_from_trainer", "base_model:microsoft/biogpt", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T15:58:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
biogpt-adverse-ner ================== This model is a fine-tuned version of microsoft/biogpt on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1600 * Precision: 0.4255 * Recall: 0.5280 * F1: 0.4712 * Accuracy: 0.9471 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #gpt2 #token-classification #generated_from_trainer #base_model-microsoft/biogpt #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename follows the usual huggingface_sb3 naming convention and is an assumption): ```python from stable_baselines3 import A2C from huggingface_sb3 import load_from_hub checkpoint = load_from_hub(repo_id="HaitingHW/a2c-PandaReachDense-v3", filename="a2c-PandaReachDense-v3.zip") model = A2C.load(checkpoint) ```
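A short rollout sketch (an assumption, not from the card): it presumes panda-gym v3 is installed, which registers `PandaReachDense-v3` with Gymnasium on import, and reuses `model` from the loading snippet above.

```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers the Panda environments

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```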
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.17 +/- 0.11", "name": "mean_reward", "verified": false}]}]}]}
HaitingHW/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-12T16:03:26+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of a A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
null
transformers
# Mantis: Interleaved Multi-Image Instruction Tuning **Mantis** is a multimodal conversational AI model that can chat with users about images and text. It's optimized for multi-image reasoning, where interleaved text and images can be used to generate responses. Mantis is trained on the newly curated dataset **Mantis-Instruct**, a large-scale multi-image QA dataset that covers various multi-image reasoning tasks. Mantis is an active work in progress. Check our [Blog](https://tiger-ai-lab.github.io/Blog/mantis) for more details! |[Demo](https://huggingface.co/spaces/TIGER-Lab/Mantis) | [Blog](https://tiger-ai-lab.github.io/Blog/mantis) | [Github](https://github.com/TIGER-AI-Lab/Mantis) | [Models](https://huggingface.co/collections/TIGER-Lab/mantis-6619b0834594c878cdb1d6e4) | ![Mantis](https://raw.githubusercontent.com/TIGER-AI-Lab/Mantis/main/docs/assets/images/overall_barchart.jpeg) ## Inference You can install Mantis from its GitHub repository as a Python package ```bash pip install git+https://github.com/TIGER-AI-Lab/Mantis.git ``` then run inference with the example code here: [examples/run_mantis.py](https://github.com/TIGER-AI-Lab/Mantis/blob/main/examples/run_mantis_hf.py) ```python from mantis.models.mllava import chat_mllava from PIL import Image import torch image1 = "image1.jpg" image2 = "image2.jpg" images = [Image.open(image1), Image.open(image2)] # load processor and model from mantis.models.mllava import MLlavaProcessor, LlavaForConditionalGeneration processor = MLlavaProcessor.from_pretrained("TIGER-Lab/Mantis-bakllava-7b") model = LlavaForConditionalGeneration.from_pretrained("TIGER-Lab/Mantis-bakllava-7b", device_map="auto", torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2") # chat text = "<image> <image> What's the difference between these two images? Please describe as much as you can." response, history = chat_mllava(text, images, model, processor) print("USER: ", text) print("ASSISTANT: ", response) # The image on the right has a larger number of wallets displayed compared to the image on the left. The wallets in the right image are arranged in a grid pattern, while the wallets in the left image are displayed in a more scattered manner. The wallets in the right image have various colors, including red, purple, and brown, while the wallets in the left image are primarily brown. text = "How many items are there in image 1 and image 2 respectively?" response, history = chat_mllava(text, images, model, processor, history=history) print("USER: ", text) print("ASSISTANT: ", response) # There are two items in image 1 and four items in image 2. ``` Or, you can run the model without the Mantis codebase, using plain Hugging Face Transformers. See [examples/run_mantis_hf.py](https://github.com/TIGER-AI-Lab/Mantis/blob/main/examples/run_mantis_hf.py) for details. ## Training Training code will be released soon.
{"language": ["en"], "license": "apache-2.0", "tags": ["Mantis", "VLM", "LMM", "Multimodal LLM", "bakllava"], "base_model": "llava-hf/bakLlava-v1-hf", "model-index": [{"name": "Mantis-bakllava-7b", "results": []}]}
TIGER-Lab/Mantis-bakllava-7b
null
[ "transformers", "safetensors", "llava", "pretraining", "Mantis", "VLM", "LMM", "Multimodal LLM", "bakllava", "en", "base_model:llava-hf/bakLlava-v1-hf", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:04:17+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llava #pretraining #Mantis #VLM #LMM #Multimodal LLM #bakllava #en #base_model-llava-hf/bakLlava-v1-hf #license-apache-2.0 #endpoints_compatible #region-us
# Mantis: Interleaved Multi-Image Instruction Tuning Mantis is a multimodal conversational AI model that can chat with users about images and text. It's optimized for multi-image reasoning, where interleaved text and images can be used to generate responses. Mantis is trained on the newly curated dataset Mantis-Instruct, a large-scale multi-image QA dataset that covers various multi-image reasoning tasks. Mantis is an active work in progress. Check our Blog for more details! |Demo | Blog | Github | Models | !Mantis ## Inference You can install Mantis's GitHub codes as a Python package then run inference with codes here: examples/run_mantis.py Or, you can run the model without relying on the mantis codes, using pure hugging face transformers. See examples/run_mantis_hf.py for details. ## Training Training codes will be released soon.
[ "# Mantis: Interleaved Multi-Image Instruction Tuning\n\nMantis is a multimodal conversational AI model that can chat with users about images and text. It's optimized for multi-image reasoning, where interleaved text and images can be used to generate responses.\n\nMantis is trained on the newly curated dataset Mantis-Instruct, a large-scale multi-image QA dataset that covers various multi-image reasoning tasks.\n\nMantis is an active work in progress. Check our Blog for more details!\n\n|Demo | Blog | Github | Models | \n\n!Mantis", "## Inference\n\nYou can install Mantis's GitHub codes as a Python package\n\nthen run inference with codes here: examples/run_mantis.py\n\n\n\nOr, you can run the model without relying on the mantis codes, using pure hugging face transformers. See examples/run_mantis_hf.py for details.", "## Training\nTraining codes will be released soon." ]
[ "TAGS\n#transformers #safetensors #llava #pretraining #Mantis #VLM #LMM #Multimodal LLM #bakllava #en #base_model-llava-hf/bakLlava-v1-hf #license-apache-2.0 #endpoints_compatible #region-us \n", "# Mantis: Interleaved Multi-Image Instruction Tuning\n\nMantis is a multimodal conversational AI model that can chat with users about images and text. It's optimized for multi-image reasoning, where interleaved text and images can be used to generate responses.\n\nMantis is trained on the newly curated dataset Mantis-Instruct, a large-scale multi-image QA dataset that covers various multi-image reasoning tasks.\n\nMantis is an active work in progress. Check our Blog for more details!\n\n|Demo | Blog | Github | Models | \n\n!Mantis", "## Inference\n\nYou can install Mantis's GitHub codes as a Python package\n\nthen run inference with codes here: examples/run_mantis.py\n\n\n\nOr, you can run the model without relying on the mantis codes, using pure hugging face transformers. See examples/run_mantis_hf.py for details.", "## Training\nTraining codes will be released soon." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results This model is a fine-tuned version of [eugenesiow/bart-paraphrase](https://huggingface.co/eugenesiow/bart-paraphrase) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.1435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.2622 | 0.9968 | 154 | 4.8920 | | 4.4409 | 2.0 | 309 | 3.8232 | | 3.8001 | 2.9968 | 463 | 3.3055 | | 3.4922 | 4.0 | 618 | 2.9036 | | 3.2041 | 4.9968 | 772 | 2.6903 | | 3.0911 | 6.0 | 927 | 2.4548 | | 2.7878 | 6.9968 | 1081 | 2.4292 | | 2.7969 | 8.0 | 1236 | 2.2862 | | 2.5457 | 8.9968 | 1390 | 2.1846 | | 2.6063 | 9.9676 | 1540 | 2.2019 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
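A minimal inference sketch (not part of the auto-generated card): since the base model is a BART paraphraser, the checkpoint is assumed to remain usable for text-to-text paraphrasing.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a generic text2text pipeline.
paraphraser = pipeline("text2text-generation", model="Adokato/results")
print(paraphraser("The weather today is exceptionally pleasant.", max_new_tokens=40))
```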
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "eugenesiow/bart-paraphrase", "model-index": [{"name": "results", "results": []}]}
Adokato/results
null
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "base_model:eugenesiow/bart-paraphrase", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:07:12+00:00
[]
[]
TAGS #transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-eugenesiow/bart-paraphrase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
results ======= This model is a fine-tuned version of eugenesiow/bart-paraphrase on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.1435 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.40.0 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.19.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #safetensors #bart #text2text-generation #generated_from_trainer #base_model-eugenesiow/bart-paraphrase #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.40.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.19.1" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5-TextSimp-LT-BatchSize2-lr1e-4 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0672 - Rouge1: 0.7548 - Rouge2: 0.5989 - Rougel: 0.7509 - Sacrebleu: 49.0373 - Gen Len: 38.0501 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Sacrebleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 25.6783 | 0.24 | 200 | 16.0497 | 0.0109 | 0.0005 | 0.0107 | 0.0029 | 512.0 | | 1.9593 | 0.48 | 400 | 0.7780 | 0.014 | 0.0005 | 0.0136 | 0.0146 | 42.685 | | 0.2778 | 0.72 | 600 | 0.1429 | 0.4924 | 0.3128 | 0.4803 | 20.3057 | 38.0382 | | 0.1325 | 0.96 | 800 | 0.1039 | 0.6193 | 0.4369 | 0.6098 | 33.687 | 38.0501 | | 0.1702 | 1.2 | 1000 | 0.0958 | 0.6697 | 0.5016 | 0.6613 | 38.0391 | 38.0501 | | 0.13 | 1.44 | 1200 | 0.0880 | 0.6737 | 0.5051 | 0.6644 | 38.62 | 38.0501 | | 0.1086 | 1.67 | 1400 | 0.0839 | 0.6964 | 0.5326 | 0.6884 | 40.9056 | 38.0501 | | 0.0716 | 1.91 | 1600 | 0.0859 | 0.6933 | 0.5298 | 0.6862 | 40.7158 | 38.0501 | | 0.1135 | 2.15 | 1800 | 0.0820 | 0.7017 | 0.5366 | 0.6936 | 40.7484 | 38.0501 | | 0.0997 | 2.39 | 2000 | 0.0814 | 0.7011 | 0.5351 | 0.6945 | 41.1948 | 38.0501 | | 0.0996 | 2.63 | 2200 | 0.0774 | 0.7103 | 0.5522 | 0.7049 | 42.5756 | 38.0501 | | 1.1379 | 2.87 | 2400 | 0.0763 | 0.7211 | 0.5556 | 0.7152 | 43.2411 | 38.0501 | | 0.0594 | 3.11 | 2600 | 0.0776 | 0.7261 | 0.5647 | 0.7201 | 44.2205 | 38.0501 | | 0.0763 | 3.35 | 2800 | 0.0736 | 0.7309 | 0.5709 | 0.7251 | 45.2825 | 38.0501 | | 0.1641 | 3.59 | 3000 | 0.0722 | 0.7297 | 0.5685 | 0.7242 | 44.9001 | 38.0501 | | 0.1085 | 3.83 | 3200 | 0.0703 | 0.7377 | 0.5793 | 0.7319 | 45.7504 | 38.0501 | | 0.0573 | 4.07 | 3400 | 0.0719 | 0.7393 | 0.5796 | 0.7335 | 45.86 | 38.0501 | | 0.1149 | 4.31 | 3600 | 0.0705 | 0.7415 | 0.5787 | 0.7365 | 46.2652 | 38.0501 | | 0.0843 | 4.55 | 3800 | 0.0703 | 0.7385 | 0.5754 | 0.7326 | 46.5292 | 38.0501 | | 0.0658 | 4.78 | 4000 | 0.0705 | 0.7437 | 0.5855 | 0.7384 | 46.864 | 38.0501 | | 0.0676 | 5.02 | 4200 | 0.0694 | 0.7437 | 0.584 | 0.7384 | 47.1268 | 38.0501 | | 0.0657 | 5.26 | 4400 | 0.0711 | 0.7473 | 0.5913 | 0.7432 | 47.4413 | 38.0501 | | 0.0679 | 5.5 | 4600 | 0.0702 | 0.7496 | 0.5908 | 0.7446 | 47.8281 | 38.0501 | | 0.0664 | 5.74 | 4800 | 0.0671 | 0.7511 | 0.5929 | 0.7463 | 47.7693 | 38.0501 | | 0.0446 | 5.98 | 5000 | 0.0685 | 0.7533 | 0.5932 | 0.7478 | 48.032 | 38.0501 | | 0.0732 | 6.22 | 5200 | 0.0678 | 0.7523 | 0.5948 | 0.7472 | 48.3467 | 38.0501 | | 0.0706 | 6.46 | 5400 | 0.0672 | 0.755 | 0.5983 | 0.7507 | 48.6158 | 38.0501 | | 0.051 | 6.7 | 5600 | 0.0674 | 0.7523 | 0.5961 | 0.7478 | 48.4828 | 38.0501 | | 0.067 | 6.94 | 5800 | 0.0681 | 0.7532 | 0.5978 | 0.7492 | 48.7253 | 38.0501 | | 0.075 | 7.18 | 6000 | 
0.0684 | 0.7534 | 0.5969 | 0.7492 | 48.7053 | 38.0501 | | 0.1323 | 7.42 | 6200 | 0.0671 | 0.755 | 0.5991 | 0.7511 | 48.9922 | 38.0501 | | 0.0383 | 7.66 | 6400 | 0.0671 | 0.7551 | 0.5994 | 0.7511 | 49.0028 | 38.0501 | | 0.0599 | 7.89 | 6600 | 0.0672 | 0.7548 | 0.5989 | 0.7509 | 49.0373 | 38.0501 | ### Framework versions - Transformers 4.33.0 - Pytorch 2.1.2+cu121 - Datasets 2.14.4 - Tokenizers 0.13.3
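A minimal inference sketch (not generated by the Trainer): the model name suggests Lithuanian text simplification, and the sketch assumes raw text input with no task prefix, which may need adjusting for this checkpoint.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "eglkan1/mT5-TextSimp-LT-BatchSize2-lr1e-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example Lithuanian input; the prompt format is an assumption.
text = "Lietuvos nacionalinė Martyno Mažvydo biblioteka yra didžiausia biblioteka Lietuvoje."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```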
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge", "sacrebleu"], "base_model": "google/mt5-base", "model-index": [{"name": "mT5-TextSimp-LT-BatchSize2-lr1e-4", "results": []}]}
eglkan1/mT5-TextSimp-LT-BatchSize2-lr1e-4
null
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:07:47+00:00
[]
[]
TAGS #transformers #pytorch #t5 #text2text-generation #generated_from_trainer #base_model-google/mt5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
mT5-TextSimp-LT-BatchSize2-lr1e-4 ================================= This model is a fine-tuned version of google/mt5-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.0672 * Rouge1: 0.7548 * Rouge2: 0.5989 * Rougel: 0.7509 * Sacrebleu: 49.0373 * Gen Len: 38.0501 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 8 ### Training results ### Framework versions * Transformers 4.33.0 * Pytorch 2.1.2+cu121 * Datasets 2.14.4 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.33.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.4\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #t5 #text2text-generation #generated_from_trainer #base_model-google/mt5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 8", "### Training results", "### Framework versions\n\n\n* Transformers 4.33.0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.4\n* Tokenizers 0.13.3" ]
null
null
# CivitAI Semi-Permeable Membrane (SPM) Training Using the [original](https://github.com/Con6924/SPM) code provided by the authors of the [One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications](https://arxiv.org/abs/2312.16145) paper, this repository equips you with configuration files to train SPMs on various diffusion models like SD1.5, SDXL, and Pony Diffusion. ## Our Approach with SPMs At CivitAI, we leverage SPMs for content moderation to ensure that no CSAM or Toxic Mature Content is generated via our on-site generator. Our unique approach involves training multiple SPMs on distinct concepts and then merging these models into one, just as you would with multiple LoRAs. This method enhances the adaptability and effectiveness of our models in handling diverse content moderation needs. ## Using the SPMs from this repository Load them just like conventional LoRAs, either with a diffusers pipeline or with your favorite app like ComfyUI; a loading sketch follows this card. ### Recommended weights |File|Recommended Weight| |----|------------------| |CSAM_SD15|2.5| |CSAM_SDXL|2.5| |MATURE_CONTENT_SD15|3| |MATURE_CONTENT_SDXL|5| ## Creating Your Composite SPM To emulate CivitAI's SPMs setup, you should train individual SPMs on various concepts initially. Post-training, these models can be merged to form a comprehensive, unified model capable of sophisticated content moderation across different scenarios and diffusion models. ### Steps to Create a Composite SPM: 1. **Train Individual Models**: Start by training separate SPMs on different concepts. Each model specializes in recognizing and moderating specific content types. 2. **Merge Models**: Combine these trained models using a methodology akin to LoRA merging. 3. **Evaluate and Iterate**: Test the combined model's effectiveness across scenarios that the individual models were trained on, and iteratively refine the merging process for improved performance. By following these steps, you can create a robust and versatile SPM that mirrors the functionality of CivitAI’s own systems, ensuring effective moderation across multiple diffusion platforms. ## Training Recommended VRAM |Base Model|Recommended VRAM| |----------|----------------| |SD 1.5|20GB| |SDXL|48GB| --- For more information, please see our [github](https://github.com/civitai/SPM) ## Acknowledgements: This repo and methodology were pioneered by [Lyu et al.](https://github.com/Con6924/SPM). Please see their original paper and repo for more information.
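A loading sketch under stated assumptions (the card only says SPMs load like conventional LoRAs): the weight file names below append `.safetensors` to the file names in the table, which is an assumption, and `set_adapters` requires a diffusers version with the PEFT backend.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load an SPM exactly like a LoRA; the weight file name is an assumption.
pipe.load_lora_weights("Civitai/SPM", weight_name="MATURE_CONTENT_SD15.safetensors", adapter_name="mature")
pipe.set_adapters(["mature"], adapter_weights=[3.0])  # recommended weight from the table above

image = pipe("a portrait photo of a person").images[0]
```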
{"license": "apache-2.0"}
Civitai/SPM
null
[ "arxiv:2312.16145", "license:apache-2.0", "region:us" ]
null
2024-04-12T16:08:22+00:00
[ "2312.16145" ]
[]
TAGS #arxiv-2312.16145 #license-apache-2.0 #region-us
CivitAI Semi-Permeable Membrane (SPM) Training ============================================== Using the original code provided by the authors of the One-Dimensional Adapter to Rule Them All: Concepts, Diffusion Models and Erasing Applications paper, this repository equips you with configuration files to train SPMs on various diffusion models like SD1.5, SDXL, and Pony Diffusion. Our Approach with SPMs ---------------------- At CivitAI, we leverage SPMs for content moderation to ensure that no CSAM or Toxic Mature Content is generated via our on-site generator. Our unique approach involves training multiple SPMs on distinct concepts and then merging these models into one just like you would do with multiple LoRAs. This method enhances the adaptability and effectiveness of our models in handling diverse content moderation needs. Using the SPMs from this repository ----------------------------------- Load them just like conventional LoRAs either with a diffusers pipeline or with your favorite app like ComfyUI. ### Recommended weights Creating Your Composite SPM --------------------------- To emulate CivitAI's SPMs setup, you should train individual SPMs on various concepts initially. Post-training, these models can be merged to form a comprehensive, unified model capable of sophisticated content moderation across different scenarios and diffusion models. ### Steps to Create a Composite SPM: 1. Train Individual Models: Start by training separate SPMs on different concept. Each model specializes in recognizing and moderating specific content types. 2. Merge Models: Combine these trained models using a methodology akin to LoRA merging. 3. Evaluate and Iterate: Test the combined model's effectiveness across scenarios that the individual models were trained on, and iteratively refine the merging process for improved performance. By following these steps, you can create a robust and versatile SPM that mirrors the functionality of CivitAI’s own systems, ensuring effective moderation across multiple diffusion platforms. Training Recommended VRAM ------------------------- --- For more information, please see our github Acknowledgements: ----------------- This repo and methodology was pioneered by Lyu et al.. Please see their original paper and repo for more information.
[ "### Recommended weights\n\n\n\nCreating Your Composite SPM\n---------------------------\n\n\nTo emulate CivitAI's SPMs setup, you should train individual SPMs on various concepts initially. Post-training, these models can be merged to form a comprehensive, unified model capable of sophisticated content moderation across different scenarios and diffusion models.", "### Steps to Create a Composite SPM:\n\n\n1. Train Individual Models: Start by training separate SPMs on different concept. Each model specializes in recognizing and moderating specific content types.\n2. Merge Models: Combine these trained models using a methodology akin to LoRA merging.\n3. Evaluate and Iterate: Test the combined model's effectiveness across scenarios that the individual models were trained on, and iteratively refine the merging process for improved performance.\n\n\nBy following these steps, you can create a robust and versatile SPM that mirrors the functionality of CivitAI’s own systems, ensuring effective moderation across multiple diffusion platforms.\n\n\nTraining Recommended VRAM\n-------------------------\n\n\n\n\n\n---\n\n\nFor more information, please see our github\n\n\nAcknowledgements:\n-----------------\n\n\nThis repo and methodology was pioneered by Lyu et al.. Please see their original paper and repo for more information." ]
[ "TAGS\n#arxiv-2312.16145 #license-apache-2.0 #region-us \n", "### Recommended weights\n\n\n\nCreating Your Composite SPM\n---------------------------\n\n\nTo emulate CivitAI's SPMs setup, you should train individual SPMs on various concepts initially. Post-training, these models can be merged to form a comprehensive, unified model capable of sophisticated content moderation across different scenarios and diffusion models.", "### Steps to Create a Composite SPM:\n\n\n1. Train Individual Models: Start by training separate SPMs on different concept. Each model specializes in recognizing and moderating specific content types.\n2. Merge Models: Combine these trained models using a methodology akin to LoRA merging.\n3. Evaluate and Iterate: Test the combined model's effectiveness across scenarios that the individual models were trained on, and iteratively refine the merging process for improved performance.\n\n\nBy following these steps, you can create a robust and versatile SPM that mirrors the functionality of CivitAI’s own systems, ensuring effective moderation across multiple diffusion platforms.\n\n\nTraining Recommended VRAM\n-------------------------\n\n\n\n\n\n---\n\n\nFor more information, please see our github\n\n\nAcknowledgements:\n-----------------\n\n\nThis repo and methodology was pioneered by Lyu et al.. Please see their original paper and repo for more information." ]
text-generation
mlx
# mlx-community/rho-math-1b-v0.1-4bit This model was converted to MLX format from [`microsoft/rho-math-1b-v0.1`]() using mlx-lm version **0.8.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/microsoft/rho-math-1b-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/rho-math-1b-v0.1-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "math", "mlx"], "pipeline_tag": "text-generation"}
mlx-community/rho-math-1b-v0.1-4bit
null
[ "mlx", "safetensors", "llama", "nlp", "math", "text-generation", "en", "license:mit", "region:us" ]
null
2024-04-12T16:08:31+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #nlp #math #text-generation #en #license-mit #region-us
# mlx-community/rho-math-1b-v0.1-4bit This model was converted to MLX format from ['microsoft/rho-math-1b-v0.1']() using mlx-lm version 0.8.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/rho-math-1b-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-1b-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #nlp #math #text-generation #en #license-mit #region-us \n", "# mlx-community/rho-math-1b-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-1b-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
mlx
# mlx-community/rho-math-7b-v0.1-4bit This model was converted to MLX format from [`microsoft/rho-math-7b-v0.1`]() using mlx-lm version **0.8.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/microsoft/rho-math-7b-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/rho-math-7b-v0.1-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "math", "mlx"], "pipeline_tag": "text-generation"}
mlx-community/rho-math-7b-v0.1-4bit
null
[ "mlx", "safetensors", "mistral", "nlp", "math", "text-generation", "en", "license:mit", "region:us" ]
null
2024-04-12T16:08:54+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #mistral #nlp #math #text-generation #en #license-mit #region-us
# mlx-community/rho-math-7b-v0.1-4bit This model was converted to MLX format from ['microsoft/rho-math-7b-v0.1']() using mlx-lm version 0.8.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/rho-math-7b-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-7b-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mistral #nlp #math #text-generation #en #license-mit #region-us \n", "# mlx-community/rho-math-7b-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-7b-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Grayx/unstable_91
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:09:04+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
mlx
# mlx-community/rho-math-7b-interpreter-v0.1-4bit This model was converted to MLX format from [`microsoft/rho-math-7b-interpreter-v0.1`](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) using mlx-lm version **0.8.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/rho-math-7b-interpreter-v0.1-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "math", "mlx"], "pipeline_tag": "text-generation"}
mlx-community/rho-math-7b-interpreter-v0.1-4bit
null
[ "mlx", "safetensors", "mistral", "nlp", "math", "text-generation", "en", "license:mit", "region:us" ]
null
2024-04-12T16:09:22+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #mistral #nlp #math #text-generation #en #license-mit #region-us
# mlx-community/rho-math-7b-interpreter-v0.1-4bit This model was converted to MLX format from ['microsoft/rho-math-7b-interpreter-v0.1']() using mlx-lm version 0.8.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/rho-math-7b-interpreter-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-7b-interpreter-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #mistral #nlp #math #text-generation #en #license-mit #region-us \n", "# mlx-community/rho-math-7b-interpreter-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-7b-interpreter-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
text-generation
mlx
# mlx-community/rho-math-1b-interpreter-v0.1-4bit This model was converted to MLX format from [`microsoft/rho-math-1b-interpreter-v0.1`](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) using mlx-lm version **0.8.0**. Model added by [Prince Canuma](https://twitter.com/Prince_Canuma). Refer to the [original model card](https://huggingface.co/microsoft/rho-math-1b-interpreter-v0.1) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("mlx-community/rho-math-1b-interpreter-v0.1-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
{"language": ["en"], "license": "mit", "tags": ["nlp", "math", "mlx"], "pipeline_tag": "text-generation"}
mlx-community/rho-math-1b-interpreter-v0.1-4bit
null
[ "mlx", "safetensors", "llama", "nlp", "math", "text-generation", "en", "license:mit", "region:us" ]
null
2024-04-12T16:09:47+00:00
[]
[ "en" ]
TAGS #mlx #safetensors #llama #nlp #math #text-generation #en #license-mit #region-us
# mlx-community/rho-math-1b-interpreter-v0.1-4bit This model was converted to MLX format from ['microsoft/rho-math-1b-interpreter-v0.1']() using mlx-lm version 0.8.0. Model added by Prince Canuma. Refer to the original model card for more details on the model. ## Use with mlx
[ "# mlx-community/rho-math-1b-interpreter-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-1b-interpreter-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
[ "TAGS\n#mlx #safetensors #llama #nlp #math #text-generation #en #license-mit #region-us \n", "# mlx-community/rho-math-1b-interpreter-v0.1-4bit\nThis model was converted to MLX format from ['microsoft/rho-math-1b-interpreter-v0.1']() using mlx-lm version 0.8.0.\n\nModel added by Prince Canuma.\n\nRefer to the original model card for more details on the model.", "## Use with mlx" ]
null
null
# bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF This model was converted to GGUF format from [`bibidentuhanoi/BMO-7B-Instruct_2`](https://huggingface.co/bibidentuhanoi/BMO-7B-Instruct_2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/bibidentuhanoi/BMO-7B-Instruct_2) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF --model bmo-7b-instruct_2.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF --model bmo-7b-instruct_2.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m bmo-7b-instruct_2.Q6_K.gguf -n 128 ```
{"tags": ["llama-cpp", "gguf-my-repo"]}
bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF
null
[ "gguf", "llama-cpp", "gguf-my-repo", "region:us" ]
null
2024-04-12T16:10:08+00:00
[]
[]
TAGS #gguf #llama-cpp #gguf-my-repo #region-us
# bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF This model was converted to GGUF format from 'bibidentuhanoi/BMO-7B-Instruct_2' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF\nThis model was converted to GGUF format from 'bibidentuhanoi/BMO-7B-Instruct_2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#gguf #llama-cpp #gguf-my-repo #region-us \n", "# bibidentuhanoi/BMO-7B-Instruct_2-Q6_K-GGUF\nThis model was converted to GGUF format from 'bibidentuhanoi/BMO-7B-Instruct_2' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/). - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with gptq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on an NVIDIA A100-PCIE-40GB with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running them directly in your use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due to CUDA overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping measurement when all of them have finished. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements from the original repo TinyLlama/TinyLlama-1.1B-step-50K-105b are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed the quantization-related packages. ```bash pip install auto-gptq; pip install git+https://github.com/huggingface/optimum.git; pip install git+https://github.com/huggingface/transformers.git; pip install --upgrade accelerate ``` 2. Load & run the model. ```python from transformers import AutoModelForCausalLM, AutoTokenizer model = AutoModelForCausalLM.from_pretrained("PrunaAI/TinyLlama-TinyLlama-1.1B-step-50K-105b-GPTQ-8bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-step-50K-105b") input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) tokenizer.decode(outputs[0]) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-step-50K-105b, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/TinyLlama-TinyLlama-1.1B-step-50K-105b-GPTQ-8bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-12T16:11:06+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
<div style="width: auto; margin-left: auto; margin-right: auto"> <a href="URL target="_blank" rel="noopener noreferrer"> <img src="https://i.URL alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next here. - Request access to easily compress your *own* AI models here. - Read the documentations to know more here - Join Pruna AI community on Discord here to share feedback/suggestions or get help. ## Results !image info Frequently Asked Questions - *How does the compression work?* The model is compressed with gptq. - *How does the model quality change?* The quality of the model output might vary compared to the base model. - *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - *What is the model format?* We use safetensors. - *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. - *What is the naming convention for Pruna Huggingface models?* We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. - *What are "first" metrics?* Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - *What are "Sync" and "Async" metrics?* "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-step-50K-105b installed. In particular, check python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. 2. Load & run the model. ## Configurations The configuration info are in 'smash_config.json'. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-step-50K-105b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. ## Want to compress other models? - Contact us and tell us which model to compress next here. - Request access to easily compress your own AI models here.
[ "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-step-50K-105b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-step-50K-105b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #pruna-ai #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Simply make AI models cheaper, smaller, faster, and greener!\n\n- Give a thumbs up if you like this model!\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your *own* AI models here.\n- Read the documentations to know more here\n- Join Pruna AI community on Discord here to share feedback/suggestions or get help.", "## Results\n\n!image info\n\nFrequently Asked Questions\n- *How does the compression work?* The model is compressed with gptq.\n- *How does the model quality change?* The quality of the model output might vary compared to the base model.\n- *How is the model efficiency evaluated?* These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in 'model/smash_config.json' and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.\n- *What is the model format?* We use safetensors.\n- *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data.\n- *What is the naming convention for Pruna Huggingface models?* We take the original model name and append \"turbo\", \"tiny\", or \"green\" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.\n- *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here.\n- *What are \"first\" metrics?* Results mentioning \"first\" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.\n- *What are \"Sync\" and \"Async\" metrics?* \"Sync\" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. \"Async\" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.", "## Setup\n\nYou can run the smashed model with these steps:\n\n0. Check requirements from the original repo TinyLlama/TinyLlama-1.1B-step-50K-105b installed. In particular, check python, cuda, and transformers versions.\n1. Make sure that you have installed quantization related packages.\n \n2. Load & run the model.", "## Configurations\n\nThe configuration info are in 'smash_config.json'.", "## Credits & License\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model TinyLlama/TinyLlama-1.1B-step-50K-105b before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.", "## Want to compress other models?\n\n- Contact us and tell us which model to compress next here.\n- Request access to easily compress your own AI models here." ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0925 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2886 | 1.0 | 4380 | 1.1552 | | 0.9784 | 2.0 | 8760 | 1.0722 | | 0.8007 | 3.0 | 13140 | 1.0925 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
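The card above reports training details but no inference snippet. As a stopgap, here is a hedged sketch of querying `Sourabh1407/my_awesome_qa_model` with the Transformers question-answering pipeline, inferred from the repo's DistilBERT question-answering tags; the question and context strings are made-up placeholders, not from the authors.

```python
# Hedged sketch: run extractive QA with this fine-tuned DistilBERT checkpoint.
# Inferred from the card's tags (distilbert, question-answering); not an
# official example from the model authors.
from transformers import pipeline

qa = pipeline("question-answering", model="Sourabh1407/my_awesome_qa_model")
result = qa(
    question="What was the model fine-tuned for?",   # placeholder question
    context="This DistilBERT model was fine-tuned for extractive question answering.",
)
print(result["answer"], result["score"])  # predicted span and confidence
```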
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "my_awesome_qa_model", "results": []}]}
Sourabh1407/my_awesome_qa_model
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "question-answering", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:12:41+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
my\_awesome\_qa\_model ====================== This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0925 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
null
# Stable Diffusion v2-1-base Model Card This model was generated by Hugging Face using [Apple’s repository](https://github.com/apple/ml-stable-diffusion), which is released under the [ASCL](https://github.com/apple/ml-stable-diffusion/blob/main/LICENSE.md). This version contains 4-bit linearly quantized Core ML weights for iOS 17 or macOS 14. To use weights without quantization, please visit [this model instead](https://huggingface.co/apple/coreml-stable-diffusion-2-1-base). This model card focuses on the model associated with the Stable Diffusion v2-1-base model. This `stable-diffusion-2-1-base` model fine-tunes [stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) (`512-base-ema.ckpt`) with 220k extra steps taken, with `punsafe=0.98` on the same dataset. These weights have been converted to Core ML for use on Apple Silicon hardware. There are 4 variants of the Core ML weights: ``` coreml-stable-diffusion-2-1-base ├── original │ ├── compiled # Swift inference, "original" attention │ └── packages # Python inference, "original" attention └── split_einsum ├── compiled # Swift inference, "split_einsum" attention └── packages # Python inference, "split_einsum" attention ``` There are also two zip archives suitable for use in the [Hugging Face demo app](https://github.com/huggingface/swift-coreml-diffusers) and other third-party tools: - `coreml-stable-diffusion-2-1-base-palettized_original_compiled.zip` contains the compiled, 6-bit model with `ORIGINAL` attention implementation. - `coreml-stable-diffusion-2-1-base-palettized_split_einsum_v2_compiled.zip` contains the compiled, 6-bit model with `SPLIT_EINSUM_V2` attention implementation. Please refer to https://huggingface.co/blog/diffusers-coreml for details. - Use it with 🧨 [`diffusers`](https://huggingface.co/stabilityai/stable-diffusion-2-1-base#examples) - Use it with the [`stablediffusion`](https://github.com/Stability-AI/stablediffusion) repository: download the `v2-1_512-ema-pruned.ckpt` [here](https://huggingface.co/stabilityai/stable-diffusion-2-1-base/resolve/main/v2-1_512-ema-pruned.ckpt). ## Model Details - **Developed by:** Robin Rombach, Patrick Esser - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL) - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)). - **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/). 
- **Cite as:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model was quantized by Vishnou Vinayagame and adapted from the original by Pedro Cuenca, itself adapted from Robin Rombach, Patrick Esser and David Ha.* *This model card was adapted by Pedro Cuenca from the original written by Robin Rombach, Patrick Esser and David Ha, and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
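For orientation, here is a hedged sketch of the 🧨 `diffusers` route the card links to. Note the assumption made explicit in the comments: this loads the original PyTorch checkpoint (`stabilityai/stable-diffusion-2-1-base`), not the Core ML weights in this repo, which instead require Apple's ml-stable-diffusion tooling on Apple Silicon.

```python
# Hedged sketch of the diffusers usage the card points to. This exercises the
# original PyTorch checkpoint, NOT this repo's 4-bit Core ML weights (those
# need Apple's ml-stable-diffusion tooling). Prompt is a placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("astronaut.png")
```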
{"license": "openrail++", "tags": ["stable-diffusion", "text-to-image", "core-ml"]}
Vishnou/coreml-stable-diffusion-2-1-base-4-bit
null
[ "coreml", "stable-diffusion", "text-to-image", "core-ml", "arxiv:2112.10752", "license:openrail++", "region:us" ]
null
2024-04-12T16:14:39+00:00
[ "2112.10752" ]
[]
TAGS #coreml #stable-diffusion #text-to-image #core-ml #arxiv-2112.10752 #license-openrail++ #region-us
# Stable Diffusion v2-1-base Model Card This model was generated by Hugging Face using Apple’s repository which has ASCL. This version contains 4-bit linearly quantized Core ML weights for iOS 17 or macOS 14. To use weights without quantization, please visit this model instead. This model card focuses on the model associated with the Stable Diffusion v2-1-base model. This 'stable-diffusion-2-1-base' model fine-tunes stable-diffusion-2-base ('URL') with 220k extra steps taken, with 'punsafe=0.98' on the same dataset. These weights here have been converted to Core ML for use on Apple Silicon hardware. There are 4 variants of the Core ML weights: There are also two zip archives suitable for use in the Hugging Face demo app and other third party tools: - 'coreml-stable-diffusion-2-1-base-palettized_original_compiled.zip' contains the compiled, 6-bit model with 'ORIGINAL' attention implementation. - 'coreml-stable-diffusion-2-1-base-palettized_split_einsum_v2_compiled.zip' contains the compiled, 6-bit model with 'SPLIT_EINSUM_V2' attention implementation. Please, refer to URL for details. - Use it with 'diffusers' - Use it with the 'stablediffusion' repository: download the 'v2-1_512-URL' here. ## Model Details - Developed by: Robin Rombach, Patrick Esser - Model type: Diffusion-based text-to-image generation model - Language(s): English - License: CreativeML Open RAIL++-M License - Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (OpenCLIP-ViT/H). - Resources for more information: GitHub Repository. - Cite as: @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model was quantized by Vishnou Vinayagame and adapted from the original by Pedro Cuenca, itself adapted from Robin Rombach, Patrick Esser and David Ha *This model card was adapted by Pedro Cuenca from the original written by: Robin Rombach, Patrick Esser and David Ha and is based on the Stable Diffusion v1 and DALL-E Mini model card.*
[ "# Stable Diffusion v2-1-base Model Card\n\nThis model was generated by Hugging Face using Apple’s repository which has ASCL. This version contains 4-bit linearly quantized Core ML weights for iOS 17 or macOS 14. To use weights without quantization, please visit this model instead.\n\nThis model card focuses on the model associated with the Stable Diffusion v2-1-base model.\n\nThis 'stable-diffusion-2-1-base' model fine-tunes stable-diffusion-2-base ('URL') with 220k extra steps taken, with 'punsafe=0.98' on the same dataset. \n\nThese weights here have been converted to Core ML for use on Apple Silicon hardware.\n\nThere are 4 variants of the Core ML weights:\n\n\n\nThere are also two zip archives suitable for use in the Hugging Face demo app and other third party tools:\n\n- 'coreml-stable-diffusion-2-1-base-palettized_original_compiled.zip' contains the compiled, 6-bit model with 'ORIGINAL' attention implementation.\n- 'coreml-stable-diffusion-2-1-base-palettized_split_einsum_v2_compiled.zip' contains the compiled, 6-bit model with 'SPLIT_EINSUM_V2' attention implementation.\n\nPlease, refer to URL for details.\n\n- Use it with 'diffusers'\n- Use it with the 'stablediffusion' repository: download the 'v2-1_512-URL' here.", "## Model Details\n- Developed by: Robin Rombach, Patrick Esser\n- Model type: Diffusion-based text-to-image generation model\n- Language(s): English\n- License: CreativeML Open RAIL++-M License\n- Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (OpenCLIP-ViT/H).\n- Resources for more information: GitHub Repository.\n- Cite as:\n\n @InProceedings{Rombach_2022_CVPR,\n author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\\\"orn},\n title = {High-Resolution Image Synthesis With Latent Diffusion Models},\n booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n month = {June},\n year = {2022},\n pages = {10684-10695}\n }\n\n*This model was quantized by Vishnou Vinayagame and adapted from the original by Pedro Cuenca, itself adapted from Robin Rombach, Patrick Esser and David Ha \n*This model card was adapted by Pedro Cuenca from the original written by: Robin Rombach, Patrick Esser and David Ha and is based on the Stable Diffusion v1 and DALL-E Mini model card.*" ]
[ "TAGS\n#coreml #stable-diffusion #text-to-image #core-ml #arxiv-2112.10752 #license-openrail++ #region-us \n", "# Stable Diffusion v2-1-base Model Card\n\nThis model was generated by Hugging Face using Apple’s repository which has ASCL. This version contains 4-bit linearly quantized Core ML weights for iOS 17 or macOS 14. To use weights without quantization, please visit this model instead.\n\nThis model card focuses on the model associated with the Stable Diffusion v2-1-base model.\n\nThis 'stable-diffusion-2-1-base' model fine-tunes stable-diffusion-2-base ('URL') with 220k extra steps taken, with 'punsafe=0.98' on the same dataset. \n\nThese weights here have been converted to Core ML for use on Apple Silicon hardware.\n\nThere are 4 variants of the Core ML weights:\n\n\n\nThere are also two zip archives suitable for use in the Hugging Face demo app and other third party tools:\n\n- 'coreml-stable-diffusion-2-1-base-palettized_original_compiled.zip' contains the compiled, 6-bit model with 'ORIGINAL' attention implementation.\n- 'coreml-stable-diffusion-2-1-base-palettized_split_einsum_v2_compiled.zip' contains the compiled, 6-bit model with 'SPLIT_EINSUM_V2' attention implementation.\n\nPlease, refer to URL for details.\n\n- Use it with 'diffusers'\n- Use it with the 'stablediffusion' repository: download the 'v2-1_512-URL' here.", "## Model Details\n- Developed by: Robin Rombach, Patrick Esser\n- Model type: Diffusion-based text-to-image generation model\n- Language(s): English\n- License: CreativeML Open RAIL++-M License\n- Model Description: This is a model that can be used to generate and modify images based on text prompts. It is a Latent Diffusion Model that uses a fixed, pretrained text encoder (OpenCLIP-ViT/H).\n- Resources for more information: GitHub Repository.\n- Cite as:\n\n @InProceedings{Rombach_2022_CVPR,\n author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\\\"orn},\n title = {High-Resolution Image Synthesis With Latent Diffusion Models},\n booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},\n month = {June},\n year = {2022},\n pages = {10684-10695}\n }\n\n*This model was quantized by Vishnou Vinayagame and adapted from the original by Pedro Cuenca, itself adapted from Robin Rombach, Patrick Esser and David Ha \n*This model card was adapted by Pedro Cuenca from the original written by: Robin Rombach, Patrick Esser and David Ha and is based on the Stable Diffusion v1 and DALL-E Mini model card.*" ]
null
null
### [Domain Sensitive Fine-tuning](https://github.com/mykelismyname/MSLM): The model is built by training BERT on the biomedical dataset BC2GM using an approach that learns mask-specific losses. More details in the paper below. #### Citation ``` @article{abaho2024improving, title={Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER}, author={Abaho, Micheal and Bollegala, Danushka and Leeming, Gary and Joyce, Dan and Buchan, Iain E}, journal={arXiv preprint arXiv:2403.18025}, year={2024} } ```
{}
LivNLP/MSLM_BioBERT
null
[ "region:us" ]
null
2024-04-12T16:17:06+00:00
[]
[]
TAGS #region-us
### Domain Sensitive Fine-tuning: Model is built by training BERT on a biomedical dataset BC2GM using an approach that learns mask specific losses. More details in paper below, ''' @article{abaho2024improving, title={Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER}, author={Abaho, Micheal and Bollegala, Danushka and Leeming, Gary and Joyce, Dan and Buchan, Iain E}, journal={arXiv preprint arXiv:2403.18025}, year={2024} }
[ "### Domain Sensitive Fine-tuning: \r\n\r\nModel is built by training BERT on a biomedical dataset BC2GM using an approach that learns mask specific losses.\r\nMore details in paper below,\r\n\r\n'''\r\n@article{abaho2024improving,\r\n title={Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER},\r\n author={Abaho, Micheal and Bollegala, Danushka and Leeming, Gary and Joyce, Dan and Buchan, Iain E},\r\n journal={arXiv preprint arXiv:2403.18025},\r\n year={2024}\r\n}" ]
[ "TAGS\n#region-us \n", "### Domain Sensitive Fine-tuning: \r\n\r\nModel is built by training BERT on a biomedical dataset BC2GM using an approach that learns mask specific losses.\r\nMore details in paper below,\r\n\r\n'''\r\n@article{abaho2024improving,\r\n title={Improving Pre-trained Language Model Sensitivity via Mask Specific losses: A case study on Biomedical NER},\r\n author={Abaho, Micheal and Bollegala, Danushka and Leeming, Gary and Joyce, Dan and Buchan, Iain E},\r\n journal={arXiv preprint arXiv:2403.18025},\r\n year={2024}\r\n}" ]
text-generation
transformers
# Ognoexperiment27multi_verse_modelNeuralsynthesis-7B Ognoexperiment27multi_verse_modelNeuralsynthesis-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [Kukedlc/NeuralSynthesis-7B-v0.1](https://huggingface.co/Kukedlc/NeuralSynthesis-7B-v0.1) ## 🧩 Configuration ```yaml models: - model: automerger/Ognoexperiment27Multi_verse_model-7B # No parameters necessary for base model - model: Kukedlc/NeuralSynthesis-7B-v0.1 parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: automerger/Ognoexperiment27Multi_verse_model-7B parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["Kukedlc/NeuralSynthesis-7B-v0.1"]}
automerger/Ognoexperiment27multi_verse_modelNeuralsynthesis-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "automerger", "base_model:Kukedlc/NeuralSynthesis-7B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:19:15+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Ognoexperiment27multi_verse_modelNeuralsynthesis-7B Ognoexperiment27multi_verse_modelNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration. * Kukedlc/NeuralSynthesis-7B-v0.1 ## Configuration ## Usage
[ "# Ognoexperiment27multi_verse_modelNeuralsynthesis-7B\n\nOgnoexperiment27multi_verse_modelNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* Kukedlc/NeuralSynthesis-7B-v0.1", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #base_model-Kukedlc/NeuralSynthesis-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Ognoexperiment27multi_verse_modelNeuralsynthesis-7B\n\nOgnoexperiment27multi_verse_modelNeuralsynthesis-7B is an automated merge created by Maxime Labonne using the following configuration.\n* Kukedlc/NeuralSynthesis-7B-v0.1", "## Configuration", "## Usage" ]
video-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab-greedy This model is a fine-tuned version of [MCG-NJU/videomae-base-finetuned-kinetics](https://huggingface.co/MCG-NJU/videomae-base-finetuned-kinetics) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4309 - Accuracy: 0.9183 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 561 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0121 | 0.34 | 188 | 0.7186 | 0.8583 | | 0.0003 | 1.34 | 376 | 0.4848 | 0.9046 | | 0.0003 | 2.33 | 561 | 0.4309 | 0.9183 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
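Since the card lists hyperparameters but no inference snippet, here is a hedged sketch of running this fine-tuned VideoMAE checkpoint with the standard Transformers video-classification classes. The 16 random frames are placeholders (assumption: the checkpoint keeps VideoMAE's default 16-frame input); the real label set comes from the checkpoint's config.

```python
# Hedged sketch: classify a clip with this fine-tuned VideoMAE checkpoint.
# Random frames stand in for a real video; not an official authors' example.
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo = "bluebird089/videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab-greedy"
processor = VideoMAEImageProcessor.from_pretrained(repo)
model = VideoMAEForVideoClassification.from_pretrained(repo)

video = list(np.random.randn(16, 3, 224, 224))   # 16 dummy CHW frames
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])  # predicted class
```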
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base-finetuned-kinetics", "model-index": [{"name": "videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab-greedy", "results": []}]}
bluebird089/videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab-greedy
null
[ "transformers", "tensorboard", "safetensors", "videomae", "video-classification", "generated_from_trainer", "base_model:MCG-NJU/videomae-base-finetuned-kinetics", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:19:21+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us
videomae-base-finetuned-kinetics-finetuned-data-no-yolo-colab-greedy ==================================================================== This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4309 * Accuracy: 0.9183 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * training\_steps: 561 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 561", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base-finetuned-kinetics #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 561", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
Source repo: https://github.com/rlsn/LungNoduleDetection
{"license": "mit"}
rlsn/DeTr4LungNodule
null
[ "transformers", "pytorch", "vit", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:27:16+00:00
[]
[]
TAGS #transformers #pytorch #vit #license-mit #endpoints_compatible #region-us
Source repo: URL
[]
[ "TAGS\n#transformers #pytorch #vit #license-mit #endpoints_compatible #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-large-samsum-2 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.4648 - Rouge1: 0.4729 - Rouge2: 0.2361 - Rougel: 0.3953 - Rougelsum: 0.3947 - Gen Len: 18.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 460 | 1.5889 | 0.4523 | 0.2142 | 0.3714 | 0.3708 | 18.0 | | 2.2048 | 2.0 | 921 | 1.5293 | 0.4642 | 0.231 | 0.3875 | 0.3871 | 18.0 | | 1.765 | 3.0 | 1381 | 1.4971 | 0.4662 | 0.2268 | 0.3864 | 0.3857 | 18.0 | | 1.7019 | 4.0 | 1842 | 1.4893 | 0.471 | 0.2337 | 0.3934 | 0.3925 | 18.0 | | 1.6734 | 5.0 | 2302 | 1.4844 | 0.4725 | 0.2338 | 0.3945 | 0.3937 | 18.0 | | 1.6536 | 6.0 | 2763 | 1.4707 | 0.4717 | 0.2341 | 0.3935 | 0.3928 | 18.0 | | 1.6493 | 7.0 | 3223 | 1.4746 | 0.4736 | 0.2357 | 0.3956 | 0.3947 | 18.0 | | 1.6363 | 8.0 | 3684 | 1.4688 | 0.4731 | 0.2344 | 0.3937 | 0.393 | 18.0 | | 1.6337 | 9.0 | 4144 | 1.4658 | 0.4725 | 0.2345 | 0.3937 | 0.393 | 18.0 | | 1.6283 | 9.99 | 4600 | 1.4648 | 0.4729 | 0.2361 | 0.3953 | 0.3947 | 18.0 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
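The card's metadata names PEFT as the library and `facebook/bart-large` as the base model, but gives no loading snippet. The following is a hedged sketch of attaching this adapter for dialogue summarization; the SAMSum-style dialogue and generation settings are illustrative assumptions.

```python
# Hedged sketch: attach this PEFT adapter to its facebook/bart-large base,
# inferred from the card's library_name and base_model fields. Not an
# official example from the model authors.
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large")
model = PeftModel.from_pretrained(base, "marcelomoreno26/bart-large-samsum-adapter")
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large")

dialogue = "Amanda: I baked cookies. Do you want some?\nJerry: Sure!"  # placeholder
inputs = tokenizer(dialogue, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```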
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "metrics": ["rouge"], "base_model": "facebook/bart-large", "model-index": [{"name": "bart-large-samsum-2", "results": []}]}
marcelomoreno26/bart-large-samsum-adapter
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:facebook/bart-large", "license:apache-2.0", "has_space", "region:us" ]
null
2024-04-12T16:28:01+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #has_space #region-us
bart-large-samsum-2 =================== This model is a fine-tuned version of facebook/bart-large on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.4648 * Rouge1: 0.4729 * Rouge2: 0.2361 * Rougel: 0.3953 * Rougelsum: 0.3947 * Gen Len: 18.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * PEFT 0.10.1.dev0 * Transformers 4.39.3 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-facebook/bart-large #license-apache-2.0 #has_space #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
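The card above leaves its getting-started section empty; the following is a generic, hedged loading sketch based only on the repo id and tags recorded in the fields below (BERT, feature extraction), assuming the checkpoint loads with the standard Auto classes:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Generic sketch based on the repo tags (bert, feature-extraction);
# it assumes the checkpoint loads with the standard Auto classes.
model_id = "jab11769/my-awesome-bert-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, seq_len, hidden_size)
```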
{"library_name": "transformers", "tags": []}
jab11769/my-awesome-bert-model
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:28:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Eliorkalfon/breakmymind-7B-slerp <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/breakmymind-7B-slerp-GGUF/resolve/main/breakmymind-7B-slerp.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
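As a concrete complement to the Usage section above, here is a minimal sketch that downloads and runs the Q4_K_M quant recommended in the table; `llama-cpp-python` is just one GGUF-capable runtime and is an assumption here, not something the card prescribes:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch the Q4_K_M file recommended ("fast, recommended") in the table above.
model_path = hf_hub_download(
    repo_id="mradermacher/breakmymind-7B-slerp-GGUF",
    filename="breakmymind-7B-slerp.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Q: What does a slerp merge do? A:", max_tokens=128)
print(out["choices"][0]["text"])
```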
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "WizardLM/WizardMath-7B-V1.1", "OpenPipe/mistral-ft-optimized-1227"], "base_model": "Eliorkalfon/breakmymind-7B-slerp", "quantized_by": "mradermacher"}
mradermacher/breakmymind-7B-slerp-GGUF
null
[ "transformers", "gguf", "merge", "mergekit", "lazymergekit", "WizardLM/WizardMath-7B-V1.1", "OpenPipe/mistral-ft-optimized-1227", "en", "base_model:Eliorkalfon/breakmymind-7B-slerp", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:29:37+00:00
[]
[ "en" ]
TAGS #transformers #gguf #merge #mergekit #lazymergekit #WizardLM/WizardMath-7B-V1.1 #OpenPipe/mistral-ft-optimized-1227 #en #base_model-Eliorkalfon/breakmymind-7B-slerp #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #merge #mergekit #lazymergekit #WizardLM/WizardMath-7B-V1.1 #OpenPipe/mistral-ft-optimized-1227 #en #base_model-Eliorkalfon/breakmymind-7B-slerp #license-apache-2.0 #endpoints_compatible #region-us \n" ]
null
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mit-b0-finetuned-sidewalk-semantic This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3201 - Mean Iou: 0.3806 - Mean Accuracy: 0.4846 - Overall Accuracy: 0.6943 - Accuracy Background: nan - Accuracy Hat: 0.0 - Accuracy Hair: 0.8309 - Accuracy Sunglasses: 0.0 - Accuracy Upper-clothes: 0.8803 - Accuracy Skirt: 0.5781 - Accuracy Pants: 0.8338 - Accuracy Dress: 0.4711 - Accuracy Belt: 0.0 - Accuracy Left-shoe: 0.1599 - Accuracy Right-shoe: 0.3381 - Accuracy Face: 0.8563 - Accuracy Left-leg: 0.7194 - Accuracy Right-leg: 0.7205 - Accuracy Left-arm: 0.6508 - Accuracy Right-arm: 0.6578 - Accuracy Bag: 0.5406 - Accuracy Scarf: 0.0 - Iou Background: 0.0 - Iou Hat: 0.0 - Iou Hair: 0.7122 - Iou Sunglasses: 0.0 - Iou Upper-clothes: 0.6504 - Iou Skirt: 0.4790 - Iou Pants: 0.6587 - Iou Dress: 0.3859 - Iou Belt: 0.0 - Iou Left-shoe: 0.1507 - Iou Right-shoe: 0.2691 - Iou Face: 0.7173 - Iou Left-leg: 0.5748 - Iou Right-leg: 0.5947 - Iou Left-arm: 0.5816 - Iou Right-arm: 0.5871 - Iou Bag: 0.4893 - Iou Scarf: 0.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Hat | Accuracy Hair | Accuracy Sunglasses | Accuracy Upper-clothes | Accuracy Skirt | Accuracy Pants | Accuracy Dress | Accuracy Belt | Accuracy Left-shoe | Accuracy Right-shoe | Accuracy Face | Accuracy Left-leg | Accuracy Right-leg | Accuracy Left-arm | Accuracy Right-arm | Accuracy Bag | Accuracy Scarf | Iou Background | Iou Hat | Iou Hair | Iou Sunglasses | Iou Upper-clothes | Iou Skirt | Iou Pants | Iou Dress | Iou Belt | Iou Left-shoe | Iou Right-shoe | Iou Face | Iou Left-leg | Iou Right-leg | Iou Left-arm | Iou Right-arm | Iou Bag | Iou Scarf | |:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------------:|:------------:|:-------------:|:-------------------:|:----------------------:|:--------------:|:--------------:|:--------------:|:-------------:|:------------------:|:-------------------:|:-------------:|:-----------------:|:------------------:|:-----------------:|:------------------:|:------------:|:--------------:|:--------------:|:-------:|:--------:|:--------------:|:-----------------:|:---------:|:---------:|:---------:|:--------:|:-------------:|:--------------:|:--------:|:------------:|:-------------:|:------------:|:-------------:|:-------:|:---------:| | 1.5584 | 1.0 | 100 | 1.4751 | 0.1357 | 0.2382 | 0.4526 | nan | 0.0 | 0.8771 | 0.0 | 0.8883 | 0.0443 | 0.7221 | 0.0035 | 0.0 | 0.0187 | 0.0055 | 0.2572 | 0.5884 | 0.5612 | 0.0822 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.5636 | 0.0 | 0.3813 | 0.0433 | 0.3814 | 0.0035 | 0.0 | 0.0182 | 0.0055 | 0.2523 | 0.3602 | 0.3582 | 0.0746 
| 0.0013 | 0.0 | 0.0 | | 1.1073 | 2.0 | 200 | 1.0997 | 0.2194 | 0.3308 | 0.5583 | nan | 0.0 | 0.9122 | 0.0 | 0.8933 | 0.5007 | 0.6982 | 0.1416 | 0.0 | 0.0076 | 0.0436 | 0.7573 | 0.6194 | 0.7115 | 0.2770 | 0.0608 | 0.0012 | 0.0 | 0.0 | 0.0 | 0.6610 | 0.0 | 0.4693 | 0.3429 | 0.5229 | 0.1246 | 0.0 | 0.0076 | 0.0416 | 0.6491 | 0.4038 | 0.4308 | 0.2338 | 0.0605 | 0.0012 | 0.0 | | 0.805 | 3.0 | 300 | 0.7604 | 0.2466 | 0.3515 | 0.5861 | nan | 0.0 | 0.8500 | 0.0 | 0.8839 | 0.4934 | 0.8517 | 0.2381 | 0.0 | 0.0038 | 0.0406 | 0.8209 | 0.5776 | 0.7025 | 0.2485 | 0.2341 | 0.0298 | 0.0 | 0.0 | 0.0 | 0.6900 | 0.0 | 0.5378 | 0.3542 | 0.5424 | 0.2035 | 0.0 | 0.0038 | 0.0391 | 0.6827 | 0.4027 | 0.4848 | 0.2384 | 0.2289 | 0.0296 | 0.0 | | 0.604 | 4.0 | 400 | 0.5498 | 0.2906 | 0.3944 | 0.6189 | nan | 0.0 | 0.8108 | 0.0 | 0.8788 | 0.6810 | 0.7835 | 0.2571 | 0.0 | 0.0016 | 0.1009 | 0.8612 | 0.6496 | 0.6929 | 0.4317 | 0.4043 | 0.1522 | 0.0 | 0.0 | 0.0 | 0.6910 | 0.0 | 0.5894 | 0.4338 | 0.6222 | 0.2234 | 0.0 | 0.0016 | 0.0918 | 0.6875 | 0.4402 | 0.5096 | 0.4032 | 0.3877 | 0.1492 | 0.0 | | 0.4334 | 5.0 | 500 | 0.4440 | 0.3219 | 0.4196 | 0.6428 | nan | 0.0 | 0.8265 | 0.0 | 0.8612 | 0.4725 | 0.8254 | 0.4861 | 0.0 | 0.0033 | 0.1673 | 0.8410 | 0.6689 | 0.6548 | 0.5207 | 0.5088 | 0.2962 | 0.0 | 0.0 | 0.0 | 0.6959 | 0.0 | 0.6233 | 0.3809 | 0.6130 | 0.3510 | 0.0 | 0.0033 | 0.1437 | 0.7028 | 0.4987 | 0.5323 | 0.4820 | 0.4809 | 0.2858 | 0.0 | | 0.4213 | 6.0 | 600 | 0.3817 | 0.3491 | 0.4549 | 0.6658 | nan | 0.0 | 0.8247 | 0.0 | 0.8762 | 0.7055 | 0.7855 | 0.3145 | 0.0 | 0.0273 | 0.2536 | 0.8611 | 0.6931 | 0.7257 | 0.6254 | 0.6281 | 0.4132 | 0.0 | 0.0 | 0.0 | 0.7044 | 0.0 | 0.6379 | 0.4727 | 0.6504 | 0.2752 | 0.0 | 0.0272 | 0.2056 | 0.7066 | 0.5298 | 0.5651 | 0.5557 | 0.5634 | 0.3902 | 0.0 | | 0.3325 | 7.0 | 700 | 0.3484 | 0.3690 | 0.4758 | 0.6840 | nan | 0.0 | 0.8352 | 0.0 | 0.8333 | 0.6651 | 0.8321 | 0.4643 | 0.0 | 0.0780 | 0.3248 | 0.8554 | 0.6926 | 0.7224 | 0.6461 | 0.6486 | 0.4906 | 0.0 | 0.0 | 0.0 | 0.7079 | 0.0 | 0.6573 | 0.4848 | 0.6432 | 0.3743 | 0.0 | 0.0765 | 0.2516 | 0.7128 | 0.5528 | 0.5816 | 0.5693 | 0.5773 | 0.4521 | 0.0 | | 0.2556 | 8.0 | 800 | 0.3384 | 0.3795 | 0.4845 | 0.6971 | nan | 0.0 | 0.8404 | 0.0 | 0.8723 | 0.6558 | 0.8311 | 0.4614 | 0.0 | 0.1270 | 0.3250 | 0.8533 | 0.6978 | 0.7209 | 0.6525 | 0.6619 | 0.5364 | 0.0 | 0.0 | 0.0 | 0.7130 | 0.0 | 0.6572 | 0.5012 | 0.6634 | 0.3790 | 0.0 | 0.1220 | 0.2599 | 0.7153 | 0.5627 | 0.5908 | 0.5849 | 0.5933 | 0.4873 | 0.0 | | 0.3337 | 9.0 | 900 | 0.3201 | 0.3806 | 0.4846 | 0.6943 | nan | 0.0 | 0.8309 | 0.0 | 0.8803 | 0.5781 | 0.8338 | 0.4711 | 0.0 | 0.1599 | 0.3381 | 0.8563 | 0.7194 | 0.7205 | 0.6508 | 0.6578 | 0.5406 | 0.0 | 0.0 | 0.0 | 0.7122 | 0.0 | 0.6504 | 0.4790 | 0.6587 | 0.3859 | 0.0 | 0.1507 | 0.2691 | 0.7173 | 0.5748 | 0.5947 | 0.5816 | 0.5871 | 0.4893 | 0.0 | | 0.2843 | 10.0 | 1000 | 0.3204 | 0.3879 | 0.4943 | 0.7036 | nan | 0.0 | 0.8304 | 0.0 | 0.8535 | 0.6956 | 0.8303 | 0.4990 | 0.0 | 0.1708 | 0.3445 | 0.8594 | 0.7149 | 0.7322 | 0.6598 | 0.6786 | 0.5344 | 0.0 | 0.0 | 0.0 | 0.7126 | 0.0 | 0.6681 | 0.5240 | 0.6700 | 0.4029 | 0.0 | 0.1600 | 0.2739 | 0.7169 | 0.5757 | 0.6008 | 0.5868 | 0.6012 | 0.4902 | 0.0 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
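The card above does not include an inference snippet; a hedged sketch using the standard SegFormer classes follows. The input image path is a placeholder, and the image processor is loaded from the base checkpoint `nvidia/mit-b0` on the assumption that the fine-tuned repo keeps its preprocessing:

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# "photo.jpg" is a placeholder input; the processor comes from the base
# checkpoint (nvidia/mit-b0) as an assumption about preprocessing.
processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
model = SegformerForSemanticSegmentation.from_pretrained(
    "raks87/mit-b0-finetuned-sidewalk-semantic"
)

image = Image.open("photo.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)
pred = logits.argmax(dim=1)[0]       # per-pixel class indices
print(pred.shape)
```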
{"license": "other", "tags": ["generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "mit-b0-finetuned-sidewalk-semantic", "results": []}]}
raks87/mit-b0-finetuned-sidewalk-semantic
null
[ "transformers", "tensorboard", "safetensors", "segformer", "generated_from_trainer", "base_model:nvidia/mit-b0", "license:other", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:30:10+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
mit-b0-finetuned-sidewalk-semantic ================================== This model is a fine-tuned version of nvidia/mit-b0 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.3201 * Mean Iou: 0.3806 * Mean Accuracy: 0.4846 * Overall Accuracy: 0.6943 * Accuracy Background: nan * Accuracy Hat: 0.0 * Accuracy Hair: 0.8309 * Accuracy Sunglasses: 0.0 * Accuracy Upper-clothes: 0.8803 * Accuracy Skirt: 0.5781 * Accuracy Pants: 0.8338 * Accuracy Dress: 0.4711 * Accuracy Belt: 0.0 * Accuracy Left-shoe: 0.1599 * Accuracy Right-shoe: 0.3381 * Accuracy Face: 0.8563 * Accuracy Left-leg: 0.7194 * Accuracy Right-leg: 0.7205 * Accuracy Left-arm: 0.6508 * Accuracy Right-arm: 0.6578 * Accuracy Bag: 0.5406 * Accuracy Scarf: 0.0 * Iou Background: 0.0 * Iou Hat: 0.0 * Iou Hair: 0.7122 * Iou Sunglasses: 0.0 * Iou Upper-clothes: 0.6504 * Iou Skirt: 0.4790 * Iou Pants: 0.6587 * Iou Dress: 0.3859 * Iou Belt: 0.0 * Iou Left-shoe: 0.1507 * Iou Right-shoe: 0.2691 * Iou Face: 0.7173 * Iou Left-leg: 0.5748 * Iou Right-leg: 0.5947 * Iou Left-arm: 0.5816 * Iou Right-arm: 0.5871 * Iou Bag: 0.4893 * Iou Scarf: 0.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 2 * eval\_batch\_size: 2 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 8 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #segformer #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 8\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
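The card above is an empty template; given the repo name, a heavily hedged loading sketch follows. It assumes the repo holds a LoRA adapter trained on top of `openai/whisper-small`, which the name suggests but the card nowhere states — check the adapter's own `adapter_config.json` to confirm the true base model:

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Assumption: the repo is a LoRA adapter for openai/whisper-small,
# as its name suggests; this is not documented in the card itself.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "simonamdev/openai-whisper-small-jv-PeftType.LORA")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
```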
{"library_name": "transformers", "tags": []}
simonamdev/openai-whisper-small-jv-PeftType.LORA
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:31:52+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: girayo/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
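A minimal sketch for fetching this trained agent's files locally with `huggingface_hub`; the destination directory is hypothetical, and the repo is assumed to follow the standard ML-Agents push layout (`.onnx` policy plus run configuration):

```python
from huggingface_hub import snapshot_download

# Download the trained Huggy policy and its run configuration.
# "./ppo-Huggy" is a hypothetical local destination, not part of the card.
local_dir = snapshot_download(repo_id="girayo/ppo-Huggy", local_dir="./ppo-Huggy")
print(local_dir)
```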
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
girayo/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-12T16:32:20+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how ML-Agents works: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Find your model_id: girayo/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Find your model_id: girayo/ppo-Huggy\n 3. Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how ML-Agents works:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Find your model_id: girayo/ppo-Huggy\n 3. Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
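The card above is an empty template; here is a generic, hedged generation sketch based only on the repo id and tags recorded in the fields below (llama, text-generation, conversational):

```python
from transformers import pipeline

# Generic sketch inferred from the repo tags (llama, text-generation,
# conversational); the card itself documents no intended usage.
generator = pipeline("text-generation", model="oneandahalfcats/twoandahalfcats")
print(generator("Hello, how are you?", max_new_tokens=50)[0]["generated_text"])
```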
{"library_name": "transformers", "tags": []}
oneandahalfcats/twoandahalfcats
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:32:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
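The card above is an empty template; here is a generic, hedged classification sketch based only on the repo id and tags recorded in the fields below (BERT, text-classification). The label set and expected input language are undocumented, though the repo name suggests Spanish-language hate-speech detection:

```python
from transformers import pipeline

# Generic sketch inferred from the repo tags (bert, text-classification);
# label names and the expected input language are not documented in the card.
classifier = pipeline(
    "text-classification",
    model="jhovany/robertuito-hate-speech-94-6-Homomex2024",
)
print(classifier("texto de ejemplo"))
```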
{"library_name": "transformers", "tags": []}
jhovany/robertuito-hate-speech-94-6-Homomex2024
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:32:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
# Introduction

This ECLIPSE model weight is a tiny (33M-parameter) non-diffusion text-to-image prior model **trained on CC12M data**.

Despite being so small and trained on a limited amount of data, ECLIPSE priors achieve results comparable to those of 1-billion-parameter T2I prior models trained on millions of image-text pairs.

- **Project Page:** [https://eclipse-t2i.vercel.app](https://eclipse-t2i.vercel.app)
- **GitHub:** [https://github.com/eclipse-t2i/eclipse-inference](https://github.com/eclipse-t2i/eclipse-inference)

## Evaluations

![Qualitative Examples](./assets/example.png)

![Results](./assets/results.png)

## Installation

```bash
git clone git@github.com:eclipse-t2i/eclipse-inference.git
cd eclipse-inference

conda create -p ./venv python=3.9
conda activate ./venv
pip install -r requirements.txt
```

## Run Inference

This repository supports two pre-trained image decoders: [Karlo-v1-alpha](https://huggingface.co/kakaobrain/karlo-v1-alpha) and [Kandinsky-v2.2](https://huggingface.co/kandinsky-community/kandinsky-2-2-decoder).

Note: the ECLIPSE prior is not a diffusion model, while the image decoders are.

### Karlo Inference

```python
from src.pipelines.pipeline_unclip import UnCLIPPipeline
from src.priors.prior_transformer import PriorTransformer

prior = PriorTransformer.from_pretrained("ECLIPSE-Community/ECLIPSE_Karlo_Prior")
pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", prior=prior).to("cuda")

prompt = "black apples in the basket"
images = pipe(prompt, decoder_guidance_scale=7.5).images
images[0]
```

### Kandinsky Inference

```python
from src.pipelines.pipeline_kandinsky_prior import KandinskyPriorPipeline
from src.priors.prior_transformer import PriorTransformer
from diffusers import DiffusionPipeline

prior = PriorTransformer.from_pretrained("ECLIPSE-Community/ECLIPSE_KandinskyV22_Prior")
pipe_prior = KandinskyPriorPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-prior", prior=prior).to("cuda")
pipe = DiffusionPipeline.from_pretrained("kandinsky-community/kandinsky-2-2-decoder").to("cuda")

prompt = "black apples in the basket"
image_embeds, negative_image_embeds = pipe_prior(prompt).to_tuple()
images = pipe(
    num_inference_steps=50,
    image_embeds=image_embeds,
    negative_image_embeds=negative_image_embeds,
).images
images[0]
```

## Limitations

The model is intended for research purposes only, to demonstrate a way of reducing unnecessary resource usage in existing T2I research.

As this prior model is trained on a very small LAION subset with CLIP supervision, it inherits the limitations of the CLIP model, such as:
* Lack of spatial understanding.
* Inability to render legible text.
* Complex compositionality remains a major challenge, and may improve as CLIP improves.
* While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
{"language": ["en"], "license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "prior", "unclip", "kandinskyv2.2"]}
ECLIPSE-Community/ECLIPSE_Karlo_Prior
null
[ "diffusers", "safetensors", "text-to-image", "prior", "unclip", "kandinskyv2.2", "en", "license:openrail++", "region:us" ]
null
2024-04-12T16:33:52+00:00
[]
[ "en" ]
TAGS #diffusers #safetensors #text-to-image #prior #unclip #kandinskyv2.2 #en #license-openrail++ #region-us
# Introduction This ECLIPSE model weight is a tiny (33M parameter) non-diffusion text-to-image prior model trained on CC12M data. Despite being so small and trained on a limited amount of data, ECLIPSE priors achieve results that of 1 Billion parameter T2I prior models trained on millions of image-text pairs. - Project Page: URL - GitHub: URL ## Evaluations !Qualitative Examples !Results ## Installation ## Run Inference This repository supports two pre-trained image decoders: Karlo-v1-alpha and Kandinsky-v2.2. Note: ECLIPSE prior is not a diffusion model -- while image decoders are. ### Karlo Inference ### Kandinsky Inference ## Limitations The model is intended for research purposes only to show a way to reduce the unnecessary resource usage in existing T2I research. As this prior model is trained using very small LAION subset and CLIP supervision, it will observe the limitations from the CLIP model such as: * Lack of spatial understanding. * Cannot render legible text * Complex compositionality is still a big challenge that can be improved if CLIP is improved. * While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
[ "# Introduction\n\nThis ECLIPSE model weight is a tiny (33M parameter) non-diffusion text-to-image prior model trained on CC12M data.\n\nDespite being so small and trained on a limited amount of data, ECLIPSE priors achieve results that of 1 Billion parameter T2I prior models trained on millions of image-text pairs.\n\n- Project Page: URL\n- GitHub: URL", "## Evaluations\n\n!Qualitative Examples\n\n!Results", "## Installation", "## Run Inference\n\nThis repository supports two pre-trained image decoders: Karlo-v1-alpha and Kandinsky-v2.2.\nNote: ECLIPSE prior is not a diffusion model -- while image decoders are.", "### Karlo Inference", "### Kandinsky Inference", "## Limitations\n\nThe model is intended for research purposes only to show a way to reduce the unnecessary resource usage in existing T2I research.\n\nAs this prior model is trained using very small LAION subset and CLIP supervision, it will observe the limitations from the CLIP model such as: \n* Lack of spatial understanding.\n* Cannot render legible text\n* Complex compositionality is still a big challenge that can be improved if CLIP is improved.\n* While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases." ]
[ "TAGS\n#diffusers #safetensors #text-to-image #prior #unclip #kandinskyv2.2 #en #license-openrail++ #region-us \n", "# Introduction\n\nThis ECLIPSE model weight is a tiny (33M parameter) non-diffusion text-to-image prior model trained on CC12M data.\n\nDespite being so small and trained on a limited amount of data, ECLIPSE priors achieve results that of 1 Billion parameter T2I prior models trained on millions of image-text pairs.\n\n- Project Page: URL\n- GitHub: URL", "## Evaluations\n\n!Qualitative Examples\n\n!Results", "## Installation", "## Run Inference\n\nThis repository supports two pre-trained image decoders: Karlo-v1-alpha and Kandinsky-v2.2.\nNote: ECLIPSE prior is not a diffusion model -- while image decoders are.", "### Karlo Inference", "### Kandinsky Inference", "## Limitations\n\nThe model is intended for research purposes only to show a way to reduce the unnecessary resource usage in existing T2I research.\n\nAs this prior model is trained using very small LAION subset and CLIP supervision, it will observe the limitations from the CLIP model such as: \n* Lack of spatial understanding.\n* Cannot render legible text\n* Complex compositionality is still a big challenge that can be improved if CLIP is improved.\n* While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases." ]
text-generation
transformers
![SauerkrautLM](https://vago-solutions.ai/wp-content/uploads/2024/04/SauerkrautLM-Qwen-32b.png "SauerkrautLM-Qwen-32b")
## VAGO solutions SauerkrautLM-Qwen-32b

Introducing **SauerkrautLM-Qwen-32b** – our Sauerkraut version of the powerful [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)!

The model **SauerkrautLM-Qwen-32b** is a **joint effort** between **VAGO solutions** and **Hyperspace.ai**.

- Finetuned with **SFT**
- Aligned with **DPO**

# Table of Contents
1. [Overview of all SauerkrautLM-Qwen-32b](#all-SauerkrautLM-Qwen-32b)
2. [Model Details](#model-details)
   - [Prompt template](#prompt-template)
   - [Training procedure](#proceed-of-the-training)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)

## All SauerkrautLM-Qwen-32b

| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-Qwen-32b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-Qwen-32b) | coming soon | coming soon | coming soon |

## Model Details

**SauerkrautLM-Qwen-32b**

- **Model Type:** SauerkrautLM-Qwen-32b is a finetuned model based on [Qwen/Qwen1.5-32B](https://huggingface.co/Qwen/Qwen1.5-32B)
- **Language(s):** German, English
- **License:** [tongyi-qianwen-research](https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE)
- **Contact:** [VAGO solutions](https://vago-solutions.ai), [Hyperspace.ai](https://hyperspace.computer/)

### Training procedure:
- We trained this model for 2 epochs on 160k data samples with SFT.
- Afterwards, we applied DPO for 1 epoch on 110k samples.
- LaserRMT version coming soon

**We taught this model German language skills.** As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still a work in progress). A minimal usage sketch for the template below appears after this card.

### Prompt Template:

**English:**
```
<|im_start|>system
You are SauerkrautLM, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

**German:**
```
<|im_start|>system
Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

### Example output in German:
```
Die Karte zeigte eine verborgene Höhle, die in den umliegenden Bergen zu finden war. Mia war von der Idee fasziniert, diese Höhle zu entdecken und ihre Geheimnisse zu lüften. Sie nahm die Karte mit sich und machte sich auf den Weg, um die Höhle zu finden. Die Wanderung zu den Bergen war eine Herausforderung, aber Mia war fest entschlossen, ihr Abenteuer zu vollenden.
Sie überwand steinige Wege und überquerte klirrende Bäche, die ihre Füße kühlten und ihr die Energie für den Rest des Weges gab. Endlich erreichte Mia die Höhle, die von einem dichten Wald umgeben war. Die Höhle war ein Ort der Geheimnisse und des Staunens, der ihr Herz höher schlagen ließ. Sie betrat die Höhle, und die Dunkelheit umhüllte sie wie ein Schleier aus Stille.
In der Höhle fand Mia eine alte Schatzkiste, die mit einem alten, verwitterten Holz verziert war. Mit zitternden Händen öffnete sie die Schatzkiste und fand darin eine alte, zerfledderte Schriftrolle. Die Schriftrolle war ein geheimnisvolles Artefakt, das ihr die Geschichte der Höhle offenbarte.
```

## Evaluation

**Open LLM Leaderboard:**

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | **73.11** |
| ARC (25-shot) | 59.22 |
| HellaSwag (10-shot) | 82.32 |
| MMLU (5-shot) | 74.40 |
| TruthfulQA (0-shot) | 61.03 |
| Winogrande (5-shot) | 82.16 |
| GSM8K (5-shot) | 79.53 |

## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.

## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.

## Collaborations
We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt), [Hyperspace.computer](https://hyperspace.computer/)

## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community
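As referenced above, the ChatML prompt template can also be applied programmatically. A minimal sketch, assuming the tokenizer ships this template via `apply_chat_template` (if it does not, format the prompt string manually as shown in the Prompt Template section); the user message is illustrative only:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-Qwen-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "system", "content": "Du bist SauerkrautLM, ein hilfreicher und freundlicher KI-Assistent."},
    {"role": "user", "content": "Erkläre kurz, was ein Sprachmodell ist."},
]
# Renders the ChatML template and appends the assistant header for generation.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```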
{"language": ["de", "en"], "license": "other", "tags": ["sft", "dpo"], "license_name": "tongyi-qianwen-research", "license_link": "https://huggingface.co/Qwen/Qwen1.5-32B/blob/main/LICENSE"}
VAGOsolutions/SauerkrautLM-Qwen-32b
null
[ "transformers", "safetensors", "qwen2", "text-generation", "sft", "dpo", "conversational", "de", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:33:53+00:00
[]
[ "de", "en" ]
TAGS #transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!SauerkrautLM VAGO solutions SauerkrautLM-Qwen-32b ------------------------------------ Introducing SauerkrautLM-Qwen-32b – our Sauerkraut version of the powerful Qwen/Qwen1.5-32B! The model SauerkrautLM-Qwen-32b is a joint effort between VAGO solutions and URL. * Finetuned with SFT * Aligned with DPO Table of Contents ================= 1. Overview of all SauerkrautLM-Qwen-32b 2. Model Details * Prompt template * Training procedure 3. Evaluation 4. Disclaimer 5. Contact 6. Collaborations 7. Acknowledgement All SauerkrautLM-Qwen-32b ------------------------- Model Details ------------- SauerkrautLM-Qwen-32b * Model Type: SauerkrautLM-Qwen-32b is a finetuned Model based on Qwen/Qwen1.5-32B * Language(s): German, English * License: tongyi-qianwen-research * Contact: VAGO solutions, URL ### Training procedure: * We trained this model for 2 epochs on 160k data samples with SFT. * Afterwards we applied DPO for 1 epoch with 110k data. * LaserRMT version coming soon We teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress). ### Prompt Template: English: German: ### Example output of german language: Evaluation ---------- Open LLM Leaderboard: Disclaimer ---------- We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out. However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models. Contact ------- If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions. Collaborations -------------- We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer Acknowledgement --------------- Many thanks to Qwen for providing such valuable model to the Open-Source community
[ "### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).", "### Prompt Template:\n\n\nEnglish:\n\n\nGerman:", "### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community" ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #sft #dpo #conversational #de #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training procedure:\n\n\n* We trained this model for 2 epochs on 160k data samples with SFT.\n* Afterwards we applied DPO for 1 epoch with 110k data.\n* LaserRMT version coming soon\n\n\nWe teached German language skills on this model. As far as we know, it is the first Qwen 32B model with bilingual skills in German and English. Nevertheless, formulations may occur that are not entirely correct (still work in progress).", "### Prompt Template:\n\n\nEnglish:\n\n\nGerman:", "### Example output of german language:\n\n\nEvaluation\n----------\n\n\nOpen LLM Leaderboard:\n\n\n\nDisclaimer\n----------\n\n\nWe must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.\nHowever, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.\nAdditionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not held responsible for the actions of third parties who utilize our models.\n\n\nContact\n-------\n\n\nIf you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.\n\n\nCollaborations\n--------------\n\n\nWe are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer\n\n\nAcknowledgement\n---------------\n\n\nMany thanks to Qwen for providing such valuable model to the Open-Source community" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # llama-2-finetuned-qa-TOFU-dataset This model is a fine-tuned version of [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1874 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 3 - total_train_batch_size: 12 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 218 | 1.2291 | | No log | 1.99 | 436 | 1.1874 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
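The card ships without usage code. A minimal, hedged sketch for loading the adapter with PEFT is shown below; it assumes the repo stores a PEFT (e.g. LoRA) adapter on top of the base model named above, and the example question is illustrative only:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "AlyGreo/llama-2-finetuned-qa-TOFU-dataset"

# Loads the base model referenced in the adapter config, then attaches the adapter.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

prompt = "Question: What is the TOFU dataset about?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```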
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "NousResearch/Llama-2-7b-chat-hf", "model-index": [{"name": "llama-2-finetuned-qa-TOFU-dataset", "results": []}]}
AlyGreo/llama-2-finetuned-qa-TOFU-dataset
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-12T16:35:55+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
llama-2-finetuned-qa-TOFU-dataset ================================= This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1874 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 3 * total\_train\_batch\_size: 12 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 2 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 12\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # albert-finetuned-squad This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.15.0
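The card ships without usage code. A minimal sketch with the question-answering pipeline, assuming the checkpoint was fine-tuned for SQuAD-style extractive QA (the question/context pair below is illustrative only):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="noushsuon/albert-finetuned-squad")

result = qa(
    question="What kind of model is ALBERT?",
    context="ALBERT is a parameter-efficient variant of BERT that shares weights across layers.",
)
print(result["answer"], f"(score: {result['score']:.3f})")
```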
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "albert-base-v2", "model-index": [{"name": "albert-finetuned-squad", "results": []}]}
noushsuon/albert-finetuned-squad
null
[ "transformers", "safetensors", "albert", "question-answering", "generated_from_trainer", "base_model:albert-base-v2", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:36:14+00:00
[]
[]
TAGS #transformers #safetensors #albert #question-answering #generated_from_trainer #base_model-albert-base-v2 #license-apache-2.0 #endpoints_compatible #region-us
# albert-finetuned-squad This model is a fine-tuned version of albert-base-v2 on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.35.2 - Pytorch 2.1.0+cu121 - Datasets 2.19.0 - Tokenizers 0.15.0
[ "# albert-finetuned-squad\n\nThis model is a fine-tuned version of albert-base-v2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0" ]
[ "TAGS\n#transformers #safetensors #albert #question-answering #generated_from_trainer #base_model-albert-base-v2 #license-apache-2.0 #endpoints_compatible #region-us \n", "# albert-finetuned-squad\n\nThis model is a fine-tuned version of albert-base-v2 on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.35.2\n- Pytorch 2.1.0+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.0" ]
sentence-similarity
sentence-transformers
# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5885 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.TripletLoss.TripletLoss` with parameters: ``` {'distance_metric': 'TripletDistanceMetric.COSINE', 'triplet_margin': 1} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 1e-08 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 2648, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) (2): Normalize() ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
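Since the pipeline tag is sentence similarity, it may help to go one step beyond the encoding snippet above and score a sentence pair. A minimal sketch using the library's cosine-similarity utility:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3')

embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
# The architecture ends in a Normalize() module, so cosine similarity
# equals the dot product of the embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```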
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3
null
[ "sentence-transformers", "safetensors", "mpnet", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:36:32+00:00
[]
[]
TAGS #sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3 This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 5885 with parameters: Loss: 'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters: Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #mpnet #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# luiz-and-robert-thesis/all-mpnet-base-newtriplets-v2-lr-1e-8-m-1-e-3\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 5885 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.TripletLoss.TripletLoss' with parameters:\n \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
<img src="https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1/resolve/main/logo.png" alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for Zephyr 141B-A35B Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) that was trained using a novel alignment algorithm called [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) with **7k instances** for **1.3 hours** on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the [`argilla/distilabel-capybara-dpo-7k-binarized`](https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized) preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Model type:** A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. - **Language(s) (NLP):** Primarily English. - **License:** Apache 2.0 - **Finetuned from model:** [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1) ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/huggingface/alignment-handbook - **Dataset:** https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized ## Performance Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [IFEval](https://arxiv.org/abs/2311.07911). The scores reported below were obtained using the [LightEval](https://github.com/huggingface/lighteval) evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. | Model | MT Bench | IFEval | BBH | AGIEval | |-----------------------------------------------------------------------------------------------------|---------:|-------:|------:|--------:| | [zephyr-orpo-141b-A35b-v0.1](https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1) | 8.17 | 65.06 | 58.96 | 44.16 | | [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) | 8.26 | 52.13 | 48.50 | 41.16 | | [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) | 8.30 | 55.08 | 45.31 | 47.68 | ## Intended uses & limitations The model was fine-tuned on a blend of chat, code, math, and reasoning data. 
Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:

```python
# pip install 'transformers>=4.39.3'
# pip install accelerate

import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
messages = [
    {
        "role": "system",
        "content": "You are Zephyr, a helpful assistant.",
    },
    {"role": "user", "content": "Explain how Mixture of Experts work in language a child would understand."},
]
outputs = pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)
print(outputs[0]["generated_text"][-1]["content"])
```

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
The size and composition of the corpus used to train the base model (`mistral-community/Mixtral-8x22B-v0.1`) are also unknown; however, it is likely to have included a mix of web data and technical sources like books and code. See the [Falcon 180B model card](https://huggingface.co/tiiuae/falcon-180B#training-data) for an example of this.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 32
- total_eval_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_steps: 100
- num_epochs: 3

### Training results

### Framework versions

- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1

## Citation

If you find Zephyr 141B-A35B useful in your work, please cite the ORPO paper:

```
@misc{hong2024orpo,
      title={ORPO: Monolithic Preference Optimization without Reference Model},
      author={Jiwoo Hong and Noah Lee and James Thorne},
      year={2024},
      eprint={2403.07691},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

You may also wish to cite the creators of this model:

```
@misc{zephyr_141b,
      author = {Alvaro Bartolome and Jiwoo Hong and Noah Lee and Kashif Rasul and Lewis Tunstall},
      title = {Zephyr 141B A35B},
      year = {2024},
      publisher = {Hugging Face},
      journal = {Hugging Face repository},
      howpublished = {\url{https://huggingface.co/HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1}}
}
```
{"license": "apache-2.0", "tags": ["trl", "orpo", "generated_from_trainer"], "datasets": ["argilla/distilabel-capybara-dpo-7k-binarized"], "base_model": "mistral-community/Mixtral-8x22B-v0.1", "model-index": [{"name": "zephyr-orpo-141b-A35b-v0.1", "results": []}]}
blockblockblock/zephyr-orpo-141b-A35b-v0.1-bpw3.7
null
[ "transformers", "safetensors", "mixtral", "text-generation", "trl", "orpo", "generated_from_trainer", "conversational", "dataset:argilla/distilabel-capybara-dpo-7k-binarized", "arxiv:2403.07691", "arxiv:2311.07911", "base_model:mistral-community/Mixtral-8x22B-v0.1", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:37:44+00:00
[ "2403.07691", "2311.07911" ]
[]
TAGS #transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL alt="Zephyr 141B Logo" width="400" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for Zephyr 141B-A35B =============================== Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr 141B-A35B is the latest model in the series, and is a fine-tuned version of mistral-community/Mixtral-8x22B-v0.1 that was trained using a novel alignment algorithm called Odds Ratio Preference Optimization (ORPO) with 7k instances for 1.3 hours on 4 nodes of 8 x H100s. ORPO does not require an SFT step to achieve high performance and is thus much more computationally efficient than methods like DPO and PPO. To train Zephyr-141B-A35B, we used the 'argilla/distilabel-capybara-dpo-7k-binarized' preference dataset, which consists of synthetic, high-quality, multi-turn preferences that have been scored via LLMs. > > [!NOTE] > This model was trained collaboratively between Argilla, KAIST, and Hugging Face > > > Model Details ------------- ### Model Description * Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets. * Language(s) (NLP): Primarily English. * License: Apache 2.0 * Finetuned from model: mistral-community/Mixtral-8x22B-v0.1 ### Model Sources * Repository: URL * Dataset: URL Performance ----------- Zephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard. Intended uses & limitations --------------------------- The model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers: Bias, Risks, and Limitations ---------------------------- Zephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). It is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this. Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 1 * eval\_batch\_size: 8 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 32 * total\_train\_batch\_size: 32 * total\_eval\_batch\_size: 256 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: inverse\_sqrt * lr\_scheduler\_warmup\_steps: 100 * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.1 If you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper: You may also wish to cite the creators of this model:
[ "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #trl #orpo #generated_from_trainer #conversational #dataset-argilla/distilabel-capybara-dpo-7k-binarized #arxiv-2403.07691 #arxiv-2311.07911 #base_model-mistral-community/Mixtral-8x22B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Model Description\n\n\n* Model type: A Mixture of Experts (MoE) model with 141B total parameters and 35B active parameters. Fine-tuned on a mix of publicly available, synthetic datasets.\n* Language(s) (NLP): Primarily English.\n* License: Apache 2.0\n* Finetuned from model: mistral-community/Mixtral-8x22B-v0.1", "### Model Sources\n\n\n* Repository: URL\n* Dataset: URL\n\n\nPerformance\n-----------\n\n\nZephyr 141B-A35B was trained to test the effectiveness of ORPO at scale and the underlying dataset contains a mix of general chat capabilities. It achieves strong performance on chat benchmarks like MT Bench and IFEval. The scores reported below were obtained using the LightEval evaluation suite and each prompt has been formatted with the model's corresponding chat template to simulate real-world usage. This is why some scores may differ from those reported in technical reports or on the Open LLM Leaderboard.\n\n\n\nIntended uses & limitations\n---------------------------\n\n\nThe model was fine-tuned on a blend of chat, code, math, and reasoning data. Here's how you can run the model using the 'pipeline()' function from Transformers:\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nZephyr 141B-A35B has not been aligned to human preferences for safety within the RLHF phase or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).\nIt is also unknown what the size and composition of the corpus was used to train the base model ('mistral-community/Mixtral-8x22B-v0.1'), however it is likely to have included a mix of Web data and technical sources like books and code. See the Falcon 180B model card for an example of this.\n\n\nTraining procedure\n------------------", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 32\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 256\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: inverse\\_sqrt\n* lr\\_scheduler\\_warmup\\_steps: 100\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.1\n\n\nIf you find Zephyr 141B-A35B is useful in your work, please cite the ORPO paper:\n\n\nYou may also wish to cite the creators of this model:" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
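The getting-started section is empty; a minimal, hedged sketch for extracting embeddings from this BERT checkpoint follows. Mean pooling is an assumption here, since the card does not state which pooling the model expects:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "LuisGon/Second_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["An example sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

# Mean pooling over non-padding tokens (an assumed choice, see note above).
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # e.g. torch.Size([1, 768]) for a BERT-base-sized model
```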
{"library_name": "transformers", "tags": []}
LuisGon/Second_model
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:38:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
{"library_name": "peft"}
pankaj2609/llama-chat_finetune_full
null
[ "peft", "region:us" ]
null
2024-04-12T16:38:41+00:00
[]
[]
TAGS #peft #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: False - bnb_4bit_compute_dtype: float16 ### Framework versions - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: False\n- bnb_4bit_compute_dtype: float16", "### Framework versions\n\n- PEFT 0.4.0\n\n- PEFT 0.4.0" ]
text-generation
transformers
# Uploaded model - **Developed by:** RonanMcGovern - **License:** apache-2.0 - **Finetuned from model:** unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft"], "base_model": "unsloth/tinyllama-bnb-4bit"}
RonanMcGovern/TinyLlama-1.1B-4bit-32rank-loftq-adapters
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/tinyllama-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:39:35+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: RonanMcGovern - License: apache-2.0 - Finetuned from model: unsloth/tinyllama-bnb-4bit This llama model was trained 2x faster with Unsloth and Hugging Face's TRL library. <img src="URL" width="200"/>
[ "# Uploaded model\n\n- Developed by: RonanMcGovern\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #sft #conversational #en #base_model-unsloth/tinyllama-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: RonanMcGovern\n- License: apache-2.0\n- Finetuned from model : unsloth/tinyllama-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# GreenBit LLMs These are GreenBitAI's pretrained **low-bit** LLMs, offering extreme compression while retaining strong performance. Please refer to our [GitHub page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the models and more information. ### Zero-shot Evaluation We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the `llm_eval` library and list the results below: | **Repository (Qwen Family)** | **Avg Acc.** | **OpenBQ** | **ARC-E** | **Winogr.** | **HellaS.** | **ARC-C** | **PIQA** | **BoolQ** | **RACE** | **ANLI-R1** | **ANLI-R2** | **ANLI-R3** | **WiC** | |:----------------------------------|:------------:|:------------:|:-----------:|:-------------:|:-------------:|:-----------:|:----------:|:-----------:|:-----------:|:-------------:|:-------------:|:-------------:|:---------:| | `Qwen-1.5-0.5B-layer-mix-bpw-2.2` | 0.398 | 0.170 | 0.443 | 0.527 | 0.332 | 0.238 | 0.634 | 0.620 | 0.318 | 0.332 | 0.338 | 0.330 | 0.500 | | `Qwen-1.5-0.5B-layer-mix-bpw-2.5` | 0.394 | 0.170 | 0.514 | 0.541 | 0.337 | 0.232 | 0.637 | 0.496 | 0.318 | 0.316 | 0.358 | 0.326 | 0.490 | | `Qwen-1.5-0.5B-layer-mix-bpw-3.0` | 0.407 | 0.198 | 0.533 | 0.536 | 0.348 | 0.234 | 0.671 | 0.552 | 0.323 | 0.330 | 0.333 | 0.335 | 0.495 | | `Qwen-1.5-1.8B-layer-mix-bpw-2.2` | 0.415 | 0.218 | 0.539 | 0.586 | 0.392 | 0.260 | 0.678 | 0.622 | 0.333 | 0.333 | 0.333 | 0.336 | 0.464 | | `Qwen-1.5-1.8B-layer-mix-bpw-2.5` | 0.423 | 0.222 | 0.592 | 0.585 | 0.406 | 0.267 | 0.695 | 0.629 | 0.336 | 0.314 | 0.339 | 0.361 | 0.507 | | `Qwen-1.5-1.8B-layer-mix-bpw-3.0` | 0.438 | 0.246 | 0.576 | 0.563 | 0.413 | 0.277 | 0.694 | 0.645 | 0.352 | 0.323 | 0.336 | 0.343 | 0.492 | | `Qwen-1.5-4B-layer-mix-bpw-2.2` | 0.480 | 0.254 | 0.663 | 0.623 | 0.463 | 0.339 | 0.712 | 0.718 | 0.349 | 0.326 | 0.355 | 0.384 | 0.513 | | `Qwen-1.5-4B-layer-mix-bpw-2.5` | 0.490 | 0.266 | 0.677 | 0.629 | 0.473 | 0.365 | 0.732 | 0.717 | 0.351 | 0.372 | 0.352 | 0.360 | 0.502 | | `Qwen-1.5-4B-layer-mix-bpw-3.0` | 0.502 | 0.268 | 0.678 | 0.642 | 0.494 | 0.358 | 0.755 | 0.757 | 0.380 | 0.395 | 0.395 | 0.392 | 0.519 | | `Qwen-1.5-7B-layer-mix-bpw-2.2` | 0.513 | 0.278 | 0.669 | 0.654 | 0.504 | 0.389 | 0.741 | 0.759 | 0.376 | 0.383 | 0.410 | 0.403 | 0.517 | | `Qwen-1.5-7B-layer-mix-bpw-2.5` | 0.520 | 0.294 | 0.705 | 0.650 | 0.520 | 0.387 | 0.750 | 0.769 | 0.371 | 0.445 | 0.424 | 0.398 | 0.564 | | `Qwen-1.5-7B-layer-mix-bpw-3.0` | 0.531 | 0.292 | 0.713 | 0.654 | 0.545 | 0.405 | 0.764 | 0.807 | 0.383 | 0.424 | 0.393 | 0.414 | 0.627 | | `Qwen-1.5-14B-layer-mix-bpw-2.5` | 0.553 | 0.318 | 0.727 | 0.682 | 0.564 | 0.413 | 0.775 | 0.792 | 0.390 | 0.472 | 0.434 | 0.446 | 0.623 | | `Qwen-1.5-32B-layer-mix-bpw-3.0` | 0.599 | 0.346 | 0.775 | 0.722 | 0.620 | 0.492 | 0.807 | 0.853 | 0.444 | 0.515 | 0.494 | 0.478 | 0.642 |
{"license": "apache-2.0"}
GreenBitAI/Qwen-1.5-0.5B-Chat-layer-mix-bpw-2.2
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-12T16:40:06+00:00
[]
[]
TAGS #transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
GreenBit LLMs ============= These are GreenBitAI's pretrained low-bit LLMs, offering extreme compression while retaining strong performance. Please refer to our GitHub page for the code to run the models and more information. ### Zero-shot Evaluation We evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\_eval' library and list the results below:
[ "### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:" ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Zero-shot Evaluation\n\n\nWe evaluate the zero-shot ability of low-bit quantized Qwen1.5 models using the 'llm\\_eval' library and list the results below:" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # FFPP-Raw_1FPS_faces-expand-0-aligned_augmentation-normalize-image-mean-std This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0034 - Accuracy: 0.9983 - Recall: 0.9929 - Precision: 0.9994 - F1: 0.9961 - Roc Auc: 1.0000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | Roc Auc | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------:| | 0.1054 | 1.0 | 1377 | 0.0750 | 0.9716 | 0.9180 | 0.9495 | 0.9335 | 0.9957 | | 0.0785 | 2.0 | 2755 | 0.0406 | 0.9853 | 0.9596 | 0.9723 | 0.9660 | 0.9986 | | 0.0713 | 3.0 | 4132 | 0.0348 | 0.9878 | 0.9534 | 0.9899 | 0.9713 | 0.9994 | | 0.0447 | 4.0 | 5510 | 0.0172 | 0.9933 | 0.9842 | 0.9851 | 0.9846 | 0.9997 | | 0.0388 | 5.0 | 6887 | 0.0186 | 0.9936 | 0.9741 | 0.9964 | 0.9851 | 0.9998 | | 0.0236 | 6.0 | 8265 | 0.0119 | 0.9957 | 0.9830 | 0.9971 | 0.9900 | 0.9999 | | 0.031 | 7.0 | 9642 | 0.0137 | 0.9957 | 0.9928 | 0.9873 | 0.9900 | 0.9999 | | 0.015 | 8.0 | 11020 | 0.0072 | 0.9972 | 0.9903 | 0.9969 | 0.9936 | 1.0000 | | 0.0429 | 9.0 | 12397 | 0.0087 | 0.9967 | 0.9863 | 0.9987 | 0.9925 | 0.9999 | | 0.0186 | 10.0 | 13775 | 0.0052 | 0.9979 | 0.9919 | 0.9985 | 0.9952 | 1.0000 | | 0.0282 | 11.0 | 15152 | 0.0069 | 0.9974 | 0.9892 | 0.9988 | 0.9940 | 1.0000 | | 0.0034 | 12.0 | 16530 | 0.0045 | 0.9979 | 0.9947 | 0.9956 | 0.9951 | 1.0000 | | 0.0187 | 13.0 | 17907 | 0.0070 | 0.9972 | 0.9886 | 0.9986 | 0.9935 | 1.0000 | | 0.0136 | 14.0 | 19285 | 0.0038 | 0.9982 | 0.9931 | 0.9988 | 0.9959 | 1.0000 | | 0.006 | 15.0 | 20662 | 0.0039 | 0.9982 | 0.9928 | 0.9988 | 0.9958 | 1.0000 | | 0.0067 | 16.0 | 22040 | 0.0037 | 0.9983 | 0.9926 | 0.9995 | 0.9960 | 1.0000 | | 0.0121 | 17.0 | 23417 | 0.0036 | 0.9983 | 0.9929 | 0.9992 | 0.9960 | 1.0000 | | 0.0026 | 18.0 | 24795 | 0.0037 | 0.9982 | 0.9925 | 0.9993 | 0.9959 | 1.0000 | | 0.0024 | 19.0 | 26172 | 0.0034 | 0.9983 | 0.9932 | 0.9991 | 0.9961 | 1.0000 | | 0.002 | 19.99 | 27540 | 0.0034 | 0.9983 | 0.9929 | 0.9994 | 0.9961 | 1.0000 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy", "recall", "precision", "f1"], "base_model": "microsoft/swin-tiny-patch4-window7-224", "model-index": [{"name": "FFPP-Raw_1FPS_faces-expand-0-aligned_augmentation-normalize-image-mean-std", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.998321005581522, "name": "Accuracy"}, {"type": "recall", "value": 0.9929003967425349, "name": "Recall"}, {"type": "precision", "value": 0.9993694829760403, "name": "Precision"}, {"type": "f1", "value": 0.9961244369959149, "name": "F1"}]}]}]}
hchcsuim/FFPP-Raw_1FPS_faces-expand-0-aligned_augmentation-normalize-image-mean-std
null
[ "transformers", "tensorboard", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-tiny-patch4-window7-224", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-12T16:42:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us
FFPP-Raw\_1FPS\_faces-expand-0-aligned\_augmentation-normalize-image-mean-std ============================================================================= This model is a fine-tuned version of microsoft/swin-tiny-patch4-window7-224 on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0034 * Accuracy: 0.9983 * Recall: 0.9929 * Precision: 0.9994 * F1: 0.9961 * Roc Auc: 1.0000 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 64 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.39.2 * Pytorch 2.2.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #swin #image-classification #generated_from_trainer #dataset-imagefolder #base_model-microsoft/swin-tiny-patch4-window7-224 #license-apache-2.0 #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]