| Column          | Type            | Range / values |
|-----------------|-----------------|----------------|
| pipeline_tag    | stringclasses   | 48 values      |
| library_name    | stringclasses   | 198 values     |
| text            | stringlengths   | 1 to 900k      |
| metadata        | stringlengths   | 2 to 438k      |
| id              | stringlengths   | 5 to 122       |
| last_modified   | null            |                |
| tags            | sequencelengths | 1 to 1.84k     |
| sha             | null            |                |
| created_at      | stringlengths   | 25 to 25       |
| arxiv           | sequencelengths | 0 to 201       |
| languages       | sequencelengths | 0 to 1.83k     |
| tags_str        | stringlengths   | 17 to 9.34k    |
| text_str        | stringlengths   | 0 to 389k      |
| text_lists      | sequencelengths | 0 to 722       |
| processed_texts | sequencelengths | 1 to 723       |
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-1.3B - bnb 4bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-1.3B/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-1.3B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 31.66 | | ARC (25-shot) | 31.14 | | HellaSwag (10-shot) | 58.39 | | MMLU (5-shot) | 24.98 | | TruthfulQA (0-shot) | 37.43 | | Winogrande (5-shot) | 59.04 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 10.6 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-1.3B-4bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:27:25+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-1.3B - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
0x0mom/sl9
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:27:44+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_OASPL This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_OASPL", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_OASPL
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:28:05+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_OASPL This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_OASPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_OASPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-1.3B - bnb 8bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-1.3B/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-1.3B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 31.66 | | ARC (25-shot) | 31.14 | | HellaSwag (10-shot) | 58.39 | | MMLU (5-shot) | 24.98 | | TruthfulQA (0-shot) | 37.43 | | Winogrande (5-shot) | 59.04 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 10.6 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-1.3B-8bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:28:47+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-1.3B - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 1.3B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GPT-Neo-2.7B-Picard - bnb 4bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Picard/ Original model description: --- language: en license: mit --- # GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='mrseeker87/GPT-Neo-2.7B-Picard') >>> generator("Jean-Luc Picard", do_sample=True, min_length=50) [{'generated_text': 'Jean-Luc Picard, the captain of a Federation starship in command of one of Starfleet's few fulltime scientists.'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
{}
RichardErkhov/KoboldAI_-_GPT-Neo-2.7B-Picard-4bits
null
[ "transformers", "safetensors", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:29:06+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models GPT-Neo-2.7B-Picard - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: en license: mit --- # GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software:
[ "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
[ "TAGS\n#transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-125M - bnb 4bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-125M/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-125M) | Metric | Value | |-----------------------|---------------------------| | Avg. | 26.0 | | ARC (25-shot) | 24.06 | | HellaSwag (10-shot) | 34.14 | | MMLU (5-shot) | 23.98 | | TruthfulQA (0-shot) | 43.72 | | Winogrande (5-shot) | 50.59 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 5.5 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-125M-4bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:29:14+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-125M - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-125M - bnb 8bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-125M/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-125M) | Metric | Value | |-----------------------|---------------------------| | Avg. | 26.0 | | ARC (25-shot) | 24.06 | | HellaSwag (10-shot) | 34.14 | | MMLU (5-shot) | 23.98 | | TruthfulQA (0-shot) | 43.72 | | Winogrande (5-shot) | 50.59 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 5.5 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-125M-8bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:29:39+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-125M - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 125M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_AOSPL This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_AOSPL", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_AOSPL
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:30:22+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_AOSPL This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_AOSPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_AOSPL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GPT-Neo-2.7B-Picard - bnb 8bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Picard/ Original model description: --- language: en license: mit --- # GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ```py >>> from transformers import pipeline >>> generator = pipeline('text-generation', model='mrseeker87/GPT-Neo-2.7B-Picard') >>> generator("Jean-Luc Picard", do_sample=True, min_length=50) [{'generated_text': 'Jean-Luc Picard, the captain of a Federation starship in command of one of Starfleet's few fulltime scientists.'}] ``` ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software: ```bibtex @software{gpt-neo, author = {Black, Sid and Leo, Gao and Wang, Phil and Leahy, Connor and Biderman, Stella}, title = {{GPT-Neo: Large Scale Autoregressive Language Modeling with Mesh-Tensorflow}}, month = mar, year = 2021, note = {{If you use this software, please cite it using these metadata.}}, publisher = {Zenodo}, version = {1.0}, doi = {10.5281/zenodo.5297715}, url = {https://doi.org/10.5281/zenodo.5297715} } ```
{}
RichardErkhov/KoboldAI_-_GPT-Neo-2.7B-Picard-8bits
null
[ "transformers", "safetensors", "gpt_neo", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:31:31+00:00
[]
[]
TAGS #transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models GPT-Neo-2.7B-Picard - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: en license: mit --- # GPT-Neo 2.7B - Picard ## Model Description GPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model. ## Training data The training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres. ### How to use You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run: ### Limitations and Biases GPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. GPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ### BibTeX entry and citation info The model is made using the following software:
[ "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
[ "TAGS\n#transformers #safetensors #gpt_neo #text-generation #autotrain_compatible #endpoints_compatible #8-bit #region-us \n", "# GPT-Neo 2.7B - Picard", "## Model Description\nGPT-Neo 2.7B-Picard is a finetune created using EleutherAI's GPT-Neo 2.7B model.", "## Training data\nThe training data contains around 1800 ebooks, mostly in the sci-fi and fantasy genres.", "### How to use\nYou can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:", "### Limitations and Biases\nGPT-Neo was trained as an autoregressive language model. This means that its core functionality is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work.\nGPT-Neo was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending on your usecase GPT-Neo may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile.\nAs with all language models, it is hard to predict in advance how GPT-Neo will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results.", "### BibTeX entry and citation info\nThe model is made using the following software:" ]
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-minds-4 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the minds14 dataset. It achieves the following results on the evaluation set: - Loss: 2.6086 - Accuracy: 0.1327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 0.8 | 3 | 2.6369 | 0.0885 | | No log | 1.87 | 7 | 2.6276 | 0.0885 | | 2.6323 | 2.93 | 11 | 2.6218 | 0.1062 | | 2.6323 | 4.0 | 15 | 2.6169 | 0.1062 | | 2.6323 | 4.8 | 18 | 2.6137 | 0.0973 | | 2.6043 | 5.87 | 22 | 2.6114 | 0.1327 | | 2.6043 | 6.93 | 26 | 2.6093 | 0.1239 | | 2.5836 | 8.0 | 30 | 2.6086 | 0.1327 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["minds14"], "metrics": ["accuracy"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "wav2vec2-base-finetuned-minds-4", "results": [{"task": {"type": "audio-classification", "name": "Audio Classification"}, "dataset": {"name": "minds14", "type": "minds14", "config": "en-US", "split": "train", "args": "en-US"}, "metrics": [{"type": "accuracy", "value": 0.13274336283185842, "name": "Accuracy"}]}]}]}
saketag73/wav2vec2-base-finetuned-minds-4
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:minds14", "base_model:facebook/wav2vec2-base", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:31:35+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us
wav2vec2-base-finetuned-minds-4 =============================== This model is a fine-tuned version of facebook/wav2vec2-base on the minds14 dataset. It achieves the following results on the evaluation set: * Loss: 2.6086 * Accuracy: 0.1327 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 3e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #audio-classification #generated_from_trainer #dataset-minds14 #base_model-facebook/wav2vec2-base #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 3e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
#HP discord #Work, damn it #You old geezer
{"tags": ["conversational"]}
RadoAi/disco-hp
null
[ "transformers", "safetensors", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-17T10:32:15+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
#HP discord #Work, damn it #You old geezer
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-2.7B - bnb 4bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-2.7B/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-2.7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 33.67 | | ARC (25-shot) | 33.79 | | HellaSwag (10-shot) | 65.74 | | MMLU (5-shot) | 26.44 | | TruthfulQA (0-shot) | 34.57 | | Winogrande (5-shot) | 63.93 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 11.24 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-4bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:33:13+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-2.7B - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-355M - bnb 4bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-355M/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-355M) | Metric | Value | |-----------------------|---------------------------| | Avg. | 27.99 | | ARC (25-shot) | 25.43 | | HellaSwag (10-shot) | 46.67 | | MMLU (5-shot) | 25.3 | | TruthfulQA (0-shot) | 39.19 | | Winogrande (5-shot) | 52.88 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 6.48 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-355M-4bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:33:47+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-355M - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n" ]
text-generation
transformers
# WizardLM-2-4x7B-MoE-exl2-4_25bpw This is a quantized version of [WizardLM-2-4x7B-MoE](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE) an experimental MoE model made with [Mergekit](https://github.com/arcee-ai/mergekit). Quantization was done using version 0.0.18 of [ExLlamaV2](https://github.com/turboderp/exllamav2). Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended. For more information see the [original repository](https://huggingface.co/Skylaude/WizardLM-2-4x7B-MoE).
{"license": "apache-2.0", "tags": ["MoE", "merge", "mergekit", "Mistral", "Microsoft/WizardLM-2-7B"]}
Skylaude/WizardLM-2-4x7B-MoE-exl2-4_25bpw
null
[ "transformers", "safetensors", "mixtral", "text-generation", "MoE", "merge", "mergekit", "Mistral", "Microsoft/WizardLM-2-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:34:10+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #MoE #merge #mergekit #Mistral #Microsoft/WizardLM-2-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# WizardLM-2-4x7B-MoE-exl2-4_25bpw This is a quantized version of WizardLM-2-4x7B-MoE an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2. Please be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended. For more information see the original repository.
[ "# WizardLM-2-4x7B-MoE-exl2-4_25bpw\n\nThis is a quantized version of WizardLM-2-4x7B-MoE an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2. \n\nPlease be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.\n\nFor more information see the original repository." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #MoE #merge #mergekit #Mistral #Microsoft/WizardLM-2-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# WizardLM-2-4x7B-MoE-exl2-4_25bpw\n\nThis is a quantized version of WizardLM-2-4x7B-MoE an experimental MoE model made with Mergekit. Quantization was done using version 0.0.18 of ExLlamaV2. \n\nPlease be sure to set experts per token to 4 for the best results! Context length should be the same as Mistral-7B-Instruct-v0.1 (8k tokens). For instruction templates, Vicuna-v1.1 is recommended.\n\nFor more information see the original repository." ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Heejindo/r8uuu
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:34:20+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-355M - bnb 8bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-355M/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-355M) | Metric | Value | |-----------------------|---------------------------| | Avg. | 27.99 | | ARC (25-shot) | 25.43 | | HellaSwag (10-shot) | 46.67 | | MMLU (5-shot) | 25.3 | | TruthfulQA (0-shot) | 39.19 | | Winogrande (5-shot) | 52.88 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 6.48 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-355M-8bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:34:23+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-355M - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 355M-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GePpeTto - bnb 4bits - Model creator: https://huggingface.co/LorenzoDeMattei/ - Original model: https://huggingface.co/LorenzoDeMattei/GePpeTto/ Original model description: --- language: it --- # GePpeTto GPT2 Model 🇮🇹 Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253 ## Pretraining Corpus The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). ## Pretraining details This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: - GPT-2 small configuration - vocabulary size: 30k - Batch size: 32 - Block size: 100 - Adam Optimizer - Initial learning rate: 5e-5 - Warm up steps: 10k ## Perplexity scores | Domain | Perplexity | |---|---| | Wikipedia | 26.1052 | | ItWac | 30.3965 | | Legal | 37.2197 | | News | 45.3859 | | Social Media | 84.6408 | For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253 ## Load Pretrained Model You can use this model by installing Huggingface library `transformers`. And you can use it directly by initializing it like this: ```python from transformers import GPT2Tokenizer, GPT2Model model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto') tokenizer = GPT2Tokenizer.from_pretrained( 'LorenzoDeMattei/GePpeTto', ) ``` ## Example using GPT2LMHeadModel ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto") model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto") text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) prompts = [ "Wikipedia Geppetto", "Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"] samples_outputs = text_generator( prompts, do_sample=True, max_length=50, top_k=50, top_p=0.95, num_return_sequences=3 ) for i, sample_outputs in enumerate(samples_outputs): print(100 * '-') print("Prompt:", prompts[i]) for sample_output in sample_outputs: print("Sample:", sample_output['generated_text']) print() ``` Output is, ``` ---------------------------------------------------------------------------------------------------- Prompt: Wikipedia Geppetto Sample: Wikipedia Geppetto rosso (film 1920) Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard. Il film fu prodotto dalla Selig Poly Sample: Wikipedia Geppetto Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte. L'abitato, che si trova nel versante valtellinese, si sviluppa nella Sample: Wikipedia Geppetto di Natale (romanzo) Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012. 
---------------------------------------------------------------------------------------------------- Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo! ``` ## Citation Please use the following bibtex entry: ``` @misc{mattei2020geppetto, title={GePpeTto Carves Italian into a Language Model}, author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini}, year={2020}, eprint={2004.14253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## References Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
{}
RichardErkhov/LorenzoDeMattei_-_GePpeTto-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:2004.14253", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:35:23+00:00
[ "2004.14253" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models GePpeTto - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: it ------------ GePpeTto GPT2 Model 🇮🇹 ====================== Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: URL Pretraining Corpus ------------------ The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). Pretraining details ------------------- This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: * GPT-2 small configuration * vocabulary size: 30k * Batch size: 32 * Block size: 100 * Adam Optimizer * Initial learning rate: 5e-5 * Warm up steps: 10k Perplexity scores ----------------- For further details, qualitative analysis and human evaluation check out: URL Load Pretrained Model --------------------- You can use this model by installing Huggingface library 'transformers'. And you can use it directly by initializing it like this: Example using GPT2LMHeadModel ----------------------------- Output is, Please use the following bibtex entry: References ---------- Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
null
transformers
DPO of MonaTrix-v4 with this dataset: https://huggingface.co/datasets/CultriX/dpo-mix-ambrosia-cleaned --- tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralMaxime-7B-slerp - eren23/ogno-monarch-jaskier-merge-7b - eren23/dpo-binarized-NeutrixOmnibe-7B base_model: - Kukedlc/NeuralMaxime-7B-slerp - eren23/ogno-monarch-jaskier-merge-7b - eren23/dpo-binarized-NeutrixOmnibe-7B license: apache-2.0 --- # MonaTrix-v4 MonaTrix-v4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [Kukedlc/NeuralMaxime-7B-slerp](https://huggingface.co/Kukedlc/NeuralMaxime-7B-slerp) * [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b) * [eren23/dpo-binarized-NeutrixOmnibe-7B](https://huggingface.co/eren23/dpo-binarized-NeutrixOmnibe-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # No parameters necessary for base model - model: Kukedlc/NeuralMaxime-7B-slerp #Emphasize the beginning of Vicuna format models parameters: weight: 0.36 density: 0.65 - model: eren23/ogno-monarch-jaskier-merge-7b parameters: weight: 0.34 density: 0.6 # Vicuna format - model: eren23/dpo-binarized-NeutrixOmnibe-7B parameters: weight: 0.3 density: 0.6 merge_method: dare_ties base_model: mistralai/Mistral-7B-v0.1 parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "CultriX/MonaTrix-v4" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0"}
CultriX/MonaTrix-v4-7B-DPO
null
[ "transformers", "safetensors", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:35:26+00:00
[]
[]
TAGS #transformers #safetensors #license-apache-2.0 #endpoints_compatible #region-us
DPO of MonaTrix-v4 with this dataset: URL --- tags: - merge - mergekit - lazymergekit - Kukedlc/NeuralMaxime-7B-slerp - eren23/ogno-monarch-jaskier-merge-7b - eren23/dpo-binarized-NeutrixOmnibe-7B base_model: - Kukedlc/NeuralMaxime-7B-slerp - eren23/ogno-monarch-jaskier-merge-7b - eren23/dpo-binarized-NeutrixOmnibe-7B license: apache-2.0 --- # MonaTrix-v4 MonaTrix-v4 is a merge of the following models using LazyMergekit: * Kukedlc/NeuralMaxime-7B-slerp * eren23/ogno-monarch-jaskier-merge-7b * eren23/dpo-binarized-NeutrixOmnibe-7B ## Configuration ## Usage
[ "# MonaTrix-v4\n\nMonaTrix-v4 is a merge of the following models using LazyMergekit:\n* Kukedlc/NeuralMaxime-7B-slerp\n* eren23/ogno-monarch-jaskier-merge-7b\n* eren23/dpo-binarized-NeutrixOmnibe-7B", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #license-apache-2.0 #endpoints_compatible #region-us \n", "# MonaTrix-v4\n\nMonaTrix-v4 is a merge of the following models using LazyMergekit:\n* Kukedlc/NeuralMaxime-7B-slerp\n* eren23/ogno-monarch-jaskier-merge-7b\n* eren23/dpo-binarized-NeutrixOmnibe-7B", "## Configuration", "## Usage" ]
null
peft
## Training procedure The following `bitsandbytes` quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
{"library_name": "peft"}
harish2962/New_tinyllama_Disease_Symptom
null
[ "peft", "safetensors", "llama", "region:us" ]
null
2024-04-17T10:35:41+00:00
[]
[]
TAGS #peft #safetensors #llama #region-us
## Training procedure The following 'bitsandbytes' quantization config was used during training: - load_in_8bit: False - load_in_4bit: True - llm_int8_threshold: 6.0 - llm_int8_skip_modules: None - llm_int8_enable_fp32_cpu_offload: False - llm_int8_has_fp16_weight: False - bnb_4bit_quant_type: nf4 - bnb_4bit_use_double_quant: True - bnb_4bit_compute_dtype: bfloat16 ### Framework versions - PEFT 0.4.0
[ "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #safetensors #llama #region-us \n", "## Training procedure\n\n\nThe following 'bitsandbytes' quantization config was used during training:\n- load_in_8bit: False\n- load_in_4bit: True\n- llm_int8_threshold: 6.0\n- llm_int8_skip_modules: None\n- llm_int8_enable_fp32_cpu_offload: False\n- llm_int8_has_fp16_weight: False\n- bnb_4bit_quant_type: nf4\n- bnb_4bit_use_double_quant: True\n- bnb_4bit_compute_dtype: bfloat16", "### Framework versions\n\n\n- PEFT 0.4.0" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GePpeTto - bnb 8bits - Model creator: https://huggingface.co/LorenzoDeMattei/ - Original model: https://huggingface.co/LorenzoDeMattei/GePpeTto/ Original model description: --- language: it --- # GePpeTto GPT2 Model 🇮🇹 Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253 ## Pretraining Corpus The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). ## Pretraining details This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: - GPT-2 small configuration - vocabulary size: 30k - Batch size: 32 - Block size: 100 - Adam Optimizer - Initial learning rate: 5e-5 - Warm up steps: 10k ## Perplexity scores | Domain | Perplexity | |---|---| | Wikipedia | 26.1052 | | ItWac | 30.3965 | | Legal | 37.2197 | | News | 45.3859 | | Social Media | 84.6408 | For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253 ## Load Pretrained Model You can use this model by installing Huggingface library `transformers`. And you can use it directly by initializing it like this: ```python from transformers import GPT2Tokenizer, GPT2Model model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto') tokenizer = GPT2Tokenizer.from_pretrained( 'LorenzoDeMattei/GePpeTto', ) ``` ## Example using GPT2LMHeadModel ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto") model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto") text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) prompts = [ "Wikipedia Geppetto", "Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"] samples_outputs = text_generator( prompts, do_sample=True, max_length=50, top_k=50, top_p=0.95, num_return_sequences=3 ) for i, sample_outputs in enumerate(samples_outputs): print(100 * '-') print("Prompt:", prompts[i]) for sample_output in sample_outputs: print("Sample:", sample_output['generated_text']) print() ``` Output is, ``` ---------------------------------------------------------------------------------------------------- Prompt: Wikipedia Geppetto Sample: Wikipedia Geppetto rosso (film 1920) Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard. Il film fu prodotto dalla Selig Poly Sample: Wikipedia Geppetto Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte. L'abitato, che si trova nel versante valtellinese, si sviluppa nella Sample: Wikipedia Geppetto di Natale (romanzo) Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012. 
---------------------------------------------------------------------------------------------------- Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. Dopo aver trovato un prezioso sacchetto si reca Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo! ``` ## Citation Please use the following bibtex entry: ``` @misc{mattei2020geppetto, title={GePpeTto Carves Italian into a Language Model}, author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini}, year={2020}, eprint={2004.14253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## References Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
{}
RichardErkhov/LorenzoDeMattei_-_GePpeTto-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:2004.14253", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:35:48+00:00
[ "2004.14253" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models GePpeTto - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: it ------------ GePpeTto GPT2 Model 🇮🇹 ====================== Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: URL Pretraining Corpus ------------------ The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). Pretraining details ------------------- This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: * GPT-2 small configuration * vocabulary size: 30k * Batch size: 32 * Block size: 100 * Adam Optimizer * Initial learning rate: 5e-5 * Warm up steps: 10k Perplexity scores ----------------- For further details, qualitative analysis and human evaluation check out: URL Load Pretrained Model --------------------- You can use this model by installing Huggingface library 'transformers'. And you can use it directly by initializing it like this: Example using GPT2LMHeadModel ----------------------------- Output is, Please use the following bibtex entry: References ---------- Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-2004.14253 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) fairseq-dense-2.7B - bnb 8bits - Model creator: https://huggingface.co/KoboldAI/ - Original model: https://huggingface.co/KoboldAI/fairseq-dense-2.7B/ Original model description: --- language: en --- This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" from Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md. # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-2.7B) | Metric | Value | |-----------------------|---------------------------| | Avg. | 33.67 | | ARC (25-shot) | 33.79 | | HellaSwag (10-shot) | 65.74 | | MMLU (5-shot) | 26.44 | | TruthfulQA (0-shot) | 34.57 | | Winogrande (5-shot) | 63.93 | | GSM8K (5-shot) | 0.0 | | DROP (3-shot) | 11.24 |
{}
RichardErkhov/KoboldAI_-_fairseq-dense-2.7B-8bits
null
[ "transformers", "safetensors", "xglm", "text-generation", "arxiv:2112.10684", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:36:06+00:00
[ "2112.10684" ]
[]
TAGS #transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models fairseq-dense-2.7B - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: en ------------ This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "Efficient Large Scale Language Modeling with Mixtures of Experts" from Artetxe et al. Please refer to the original model card, which can be found at URL Open LLM Leaderboard Evaluation Results ======================================= Detailed results can be found here
[]
[ "TAGS\n#transformers #safetensors #xglm #text-generation #arxiv-2112.10684 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) GePpeTto - GGUF - Model creator: https://huggingface.co/LorenzoDeMattei/ - Original model: https://huggingface.co/LorenzoDeMattei/GePpeTto/ | Name | Quant method | Size | | ---- | ---- | ---- | | [GePpeTto.Q2_K.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q2_K.gguf) | Q2_K | 0.06GB | | [GePpeTto.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.IQ3_XS.gguf) | IQ3_XS | 0.06GB | | [GePpeTto.IQ3_S.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.IQ3_S.gguf) | IQ3_S | 0.06GB | | [GePpeTto.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q3_K_S.gguf) | Q3_K_S | 0.06GB | | [GePpeTto.IQ3_M.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.IQ3_M.gguf) | IQ3_M | 0.07GB | | [GePpeTto.Q3_K.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q3_K.gguf) | Q3_K | 0.07GB | | [GePpeTto.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q3_K_M.gguf) | Q3_K_M | 0.07GB | | [GePpeTto.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q3_K_L.gguf) | Q3_K_L | 0.07GB | | [GePpeTto.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.IQ4_XS.gguf) | IQ4_XS | 0.07GB | | [GePpeTto.Q4_0.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q4_0.gguf) | Q4_0 | 0.08GB | | [GePpeTto.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.IQ4_NL.gguf) | IQ4_NL | 0.08GB | | [GePpeTto.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q4_K_S.gguf) | Q4_K_S | 0.08GB | | [GePpeTto.Q4_K.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q4_K.gguf) | Q4_K | 0.08GB | | [GePpeTto.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q4_K_M.gguf) | Q4_K_M | 0.08GB | | [GePpeTto.Q4_1.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q4_1.gguf) | Q4_1 | 0.08GB | | [GePpeTto.Q5_0.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q5_0.gguf) | Q5_0 | 0.09GB | | [GePpeTto.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q5_K_S.gguf) | Q5_K_S | 0.09GB | | [GePpeTto.Q5_K.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q5_K.gguf) | Q5_K | 0.09GB | | [GePpeTto.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q5_K_M.gguf) | Q5_K_M | 0.09GB | | [GePpeTto.Q5_1.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q5_1.gguf) | Q5_1 | 0.1GB | | [GePpeTto.Q6_K.gguf](https://huggingface.co/RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf/blob/main/GePpeTto.Q6_K.gguf) | Q6_K | 0.1GB | Original model description: --- language: it --- # GePpeTto GPT2 Model 🇮🇹 Pretrained GPT2 117M model for Italian. 
You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: https://arxiv.org/abs/2004.14253 ## Pretraining Corpus The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). ## Pretraining details This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: - GPT-2 small configuration - vocabulary size: 30k - Batch size: 32 - Block size: 100 - Adam Optimizer - Initial learning rate: 5e-5 - Warm up steps: 10k ## Perplexity scores | Domain | Perplexity | |---|---| | Wikipedia | 26.1052 | | ItWac | 30.3965 | | Legal | 37.2197 | | News | 45.3859 | | Social Media | 84.6408 | For further details, qualitative analysis and human evaluation check out: https://arxiv.org/abs/2004.14253 ## Load Pretrained Model You can use this model by installing Huggingface library `transformers`. And you can use it directly by initializing it like this: ```python from transformers import GPT2Tokenizer, GPT2Model model = GPT2Model.from_pretrained('LorenzoDeMattei/GePpeTto') tokenizer = GPT2Tokenizer.from_pretrained( 'LorenzoDeMattei/GePpeTto', ) ``` ## Example using GPT2LMHeadModel ```python from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, GPT2Tokenizer tokenizer = AutoTokenizer.from_pretrained("LorenzoDeMattei/GePpeTto") model = AutoModelWithLMHead.from_pretrained("LorenzoDeMattei/GePpeTto") text_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) prompts = [ "Wikipedia Geppetto", "Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso"] samples_outputs = text_generator( prompts, do_sample=True, max_length=50, top_k=50, top_p=0.95, num_return_sequences=3 ) for i, sample_outputs in enumerate(samples_outputs): print(100 * '-') print("Prompt:", prompts[i]) for sample_output in sample_outputs: print("Sample:", sample_output['generated_text']) print() ``` Output is, ``` ---------------------------------------------------------------------------------------------------- Prompt: Wikipedia Geppetto Sample: Wikipedia Geppetto rosso (film 1920) Geppetto rosso ("The Smokes in the Black") è un film muto del 1920 diretto da Henry H. Leonard. Il film fu prodotto dalla Selig Poly Sample: Wikipedia Geppetto Geppetto ("Geppetto" in piemontese) è un comune italiano di 978 abitanti della provincia di Cuneo in Piemonte. L'abitato, che si trova nel versante valtellinese, si sviluppa nella Sample: Wikipedia Geppetto di Natale (romanzo) Geppetto di Natale è un romanzo di Mario Caiano, pubblicato nel 2012. ---------------------------------------------------------------------------------------------------- Prompt: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso. Il burattino riesce a scappare. 
Dopo aver trovato un prezioso sacchetto si reca Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso, e l'unico che lo possiede, ma, di fronte a tutte queste prove Sample: Maestro Ciliegia regala il pezzo di legno al suo amico Geppetto, il quale lo prende per fabbricarsi un burattino maraviglioso: - A voi gli occhi, le guance! A voi il mio pezzo! ``` ## Citation Please use the following bibtex entry: ``` @misc{mattei2020geppetto, title={GePpeTto Carves Italian into a Language Model}, author={Lorenzo De Mattei and Michele Cafagna and Felice Dell'Orletta and Malvina Nissim and Marco Guerini}, year={2020}, eprint={2004.14253}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ## References Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
{}
RichardErkhov/LorenzoDeMattei_-_GePpeTto-gguf
null
[ "gguf", "arxiv:2004.14253", "region:us" ]
null
2024-04-17T10:36:23+00:00
[ "2004.14253" ]
[]
TAGS #gguf #arxiv-2004.14253 #region-us
Quantization made by Richard Erkhov. Github Discord Request more models GePpeTto - GGUF * Model creator: URL * Original model: URL Name: GePpeTto.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.06GB Name: GePpeTto.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.06GB Name: GePpeTto.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.06GB Name: GePpeTto.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.06GB Name: GePpeTto.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.07GB Name: GePpeTto.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.07GB Name: GePpeTto.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.07GB Name: GePpeTto.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.07GB Name: GePpeTto.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.07GB Name: GePpeTto.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.08GB Name: GePpeTto.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.08GB Name: GePpeTto.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.08GB Name: GePpeTto.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.08GB Name: GePpeTto.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.08GB Name: GePpeTto.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.08GB Name: GePpeTto.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.09GB Name: GePpeTto.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.09GB Name: GePpeTto.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.09GB Name: GePpeTto.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.09GB Name: GePpeTto.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.1GB Name: GePpeTto.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.1GB Original model description: --------------------------- language: it ------------ GePpeTto GPT2 Model 🇮🇹 ====================== Pretrained GPT2 117M model for Italian. You can find further details in the paper: Lorenzo De Mattei, Michele Cafagna, Felice Dell’Orletta, Malvina Nissim, Marco Guerini "GePpeTto Carves Italian into a Language Model", arXiv preprint. Pdf available at: URL Pretraining Corpus ------------------ The pretraining set comprises two main sources. The first one is a dump of Italian Wikipedia (November 2019), consisting of 2.8GB of text. The second one is the ItWac corpus (Baroni et al., 2009), which amounts to 11GB of web texts. This collection provides a mix of standard and less standard Italian, on a rather wide chronological span, with older texts than the Wikipedia dump (the latter stretches only to the late 2000s). Pretraining details ------------------- This model was trained using GPT2's Hugging Face implemenation on 4 NVIDIA Tesla T4 GPU for 620k steps. Training parameters: * GPT-2 small configuration * vocabulary size: 30k * Batch size: 32 * Block size: 100 * Adam Optimizer * Initial learning rate: 5e-5 * Warm up steps: 10k Perplexity scores ----------------- For further details, qualitative analysis and human evaluation check out: URL Load Pretrained Model --------------------- You can use this model by installing Huggingface library 'transformers'. And you can use it directly by initializing it like this: Example using GPT2LMHeadModel ----------------------------- Output is, Please use the following bibtex entry: References ---------- Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed webcrawled corpora. Language resources and evaluation, 43(3):209–226.
[]
[ "TAGS\n#gguf #arxiv-2004.14253 #region-us \n" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505_COQE_viT5_train_Instruction0_POSAL This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_Instruction0_POSAL", "results": []}]}
ThuyNT/CS505_COQE_viT5_train_Instruction0_POSAL
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:VietAI/vit5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:36:47+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# CS505_COQE_viT5_train_Instruction0_POSAL This model is a fine-tuned version of VietAI/vit5-large on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505_COQE_viT5_train_Instruction0_POSAL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/vit5-large #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# CS505_COQE_viT5_train_Instruction0_POSAL\n\nThis model is a fine-tuned version of VietAI/vit5-large on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 32\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_usp1_400 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.4291 - Rewards/chosen: -2.1852 - Rewards/rejected: -10.4536 - Rewards/accuracies: 0.6900 - Rewards/margins: 8.2684 - Logps/rejected: -125.6639 - Logps/chosen: -112.8688 - Logits/rejected: -0.9672 - Logits/chosen: -0.9637 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0025 | 4.0 | 100 | 1.8113 | -1.7152 | -5.1212 | 0.6100 | 3.4059 | -119.7389 | -112.3466 | -0.1554 | -0.1598 | | 0.1942 | 8.0 | 200 | 3.6090 | -1.4379 | -8.2063 | 0.6100 | 6.7684 | -123.1668 | -112.0384 | -1.0994 | -1.1187 | | 0.0502 | 12.0 | 300 | 3.3229 | -9.0906 | -16.5854 | 0.6200 | 7.4948 | -132.4769 | -120.5415 | -0.9988 | -1.0079 | | 0.0 | 16.0 | 400 | 3.4296 | -2.1656 | -10.3972 | 0.6900 | 8.2316 | -125.6012 | -112.8470 | -0.9657 | -0.9623 | | 0.0 | 20.0 | 500 | 3.4471 | -2.1796 | -10.4172 | 0.7100 | 8.2376 | -125.6234 | -112.8626 | -0.9676 | -0.9637 | | 0.0 | 24.0 | 600 | 3.4031 | -2.1735 | -10.4669 | 0.7000 | 8.2933 | -125.6786 | -112.8558 | -0.9675 | -0.9640 | | 0.0 | 28.0 | 700 | 3.4346 | -2.1542 | -10.4272 | 0.7000 | 8.2730 | -125.6345 | -112.8343 | -0.9673 | -0.9639 | | 0.0 | 32.0 | 800 | 3.4246 | -2.1606 | -10.4103 | 0.6900 | 8.2497 | -125.6157 | -112.8415 | -0.9675 | -0.9642 | | 0.0 | 36.0 | 900 | 3.4315 | -2.1805 | -10.4501 | 0.7000 | 8.2696 | -125.6599 | -112.8635 | -0.9674 | -0.9639 | | 0.0 | 40.0 | 1000 | 3.4291 | -2.1852 | -10.4536 | 0.6900 | 8.2684 | -125.6639 | -112.8688 | -0.9672 | -0.9637 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp1_400", "results": []}]}
guoyu-zhang/model_hh_usp1_400
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T10:37:45+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
model\_hh\_usp1\_400 ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.4291 * Rewards/chosen: -2.1852 * Rewards/rejected: -10.4536 * Rewards/accuracies: 0.6900 * Rewards/margins: 8.2684 * Logps/rejected: -125.6639 * Logps/chosen: -112.8688 * Logits/rejected: -0.9672 * Logits/chosen: -0.9637 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
multimolecule/rna
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:38:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**
  This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

  ## Usage

  ```python

  model = load_from_hub(repo_id="AmnaShafaq/AA", filename="q-learning.pkl")

  # Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
  env = gym.make(model["env_id"])
  ```
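Once the pickle is loaded, running a greedy episode with the Q-table might look like the sketch below. It is not part of the original card and assumes the Deep RL course dictionary layout (a `qtable` key holding a NumPy array) together with the Gymnasium reset/step API.

```python
# Minimal sketch (assumes the course's model-dict layout and the Gymnasium API).
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"])   # "Taxi-v3", taken from the loaded dict
qtable = model["qtable"]          # assumed key, as used by the Deep RL course format

state, info = env.reset()
done, episode_return = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))                  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    episode_return += reward
    done = terminated or truncated
print("episode return:", episode_return)
```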
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "AA", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.50 +/- 2.75", "name": "mean_reward", "verified": false}]}]}]}
AmnaShafaq/AA
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-17T10:39:43+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
 This is a trained model of a Q-Learning agent playing Taxi-v3.

 ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) distilgpt2-base-pretrained-he - bnb 4bits - Model creator: https://huggingface.co/Norod78/ - Original model: https://huggingface.co/Norod78/distilgpt2-base-pretrained-he/ Original model description: --- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" - text: "שלום, קרואים לי" - text: "הארי פוטר חייך חיוך נבוך" - text: "החתול שלך מאוד חמוד ו" license: mit --- # distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - [HomePage](https://data.statmt.org/cc-100/) This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR> * I have made a list of items which might make it easier for other to use this script. The list was posted to [This discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351) * Further training was performed on GPU ## Usage #### Simple usage sample code ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline def main(): model_name="Norod78/distilgpt2-base-pretrained-he" prompt_text = "שלום, קוראים לי" generated_max_length = 192 print("Loading model...") model = AutoModelForCausalLM.from_pretrained(model_name) print('Loading Tokenizer...') tokenizer = AutoTokenizer.from_pretrained(model_name) text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) print("Generating text...") result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature = 1, repetition_penalty=5.0, max_length = generated_max_length) print("result = " + str(result)) if __name__ == '__main__': main() ```
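As a supplement to the original card's usage sample: this repository holds bitsandbytes 4-bit weights, and the sketch below shows the equivalent on-the-fly 4-bit loading of the original model. It is not taken from either card; it assumes a CUDA GPU with the bitsandbytes package installed, and the sampling settings are illustrative.

```python
# Minimal sketch: load the Hebrew distilgpt2 in 4-bit via bitsandbytes and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Norod78/distilgpt2-base-pretrained-he"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # keep the weights in 4-bit blocks
    bnb_4bit_compute_dtype=torch.float16,  # run the matmuls in fp16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

inputs = tokenizer("שלום, קוראים לי", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, do_sample=True, top_k=40, top_p=0.92, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```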
{}
RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:40:13+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models distilgpt2-base-pretrained-he - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: he thumbnail: URL widget: - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" - text: "שלום, קרואים לי" - text: "הארי פוטר חייך חיוך נבוך" - text: "החתול שלך מאוד חמוד ו" license: mit --- # distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - HomePage This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR> * I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum * Further training was performed on GPU ## Usage #### Simple usage sample code
[ "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) hebrew-bad_wiki-gpt_neo-tiny - bnb 4bits - Model creator: https://huggingface.co/Norod78/ - Original model: https://huggingface.co/Norod78/hebrew-bad_wiki-gpt_neo-tiny/ Original model description: --- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "מתמטיקה:" - text: "עליית המכונות" - text: "ויקיפדיה העברית" - text: "האירוויזיון הוא" - text: "דוד בן-גוריון היה" license: mit --- # hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - **Developed by:** [Doron Adler](https://github.com/Norod) - **Model Type:** Text Generation - **Language(s):** Hebrew - **License:** MIT - **Resources for more information:** - [GitHub Repo](https://github.com/Norod/hebrew-gpt_neo) - [HuggingFace Space](https://huggingface.co/spaces/Norod78/Hebrew-GPT-Neo-Small) ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data [Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny) which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning on the wiki-absract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the [hebrew-gpt_neo model github](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) * **Activation Function:** gelu * **Number_Head:** 12 * **Number_Vocab:** 50257 * **Train batch size:** 250 * **Eval batch size:** 64 * **Predict batch size:** 1 ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). 
- **Hardware Type:** [More information needed] - **Hours used:** Unknown - **Cloud Provider:** GCP tpu-v8s - **Compute Region:** europe-west4 - **Carbon Emitted:** [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) ​​ ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") ```
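The original snippet stops after loading the tokenizer and model. A short generation step might look like the following sketch; it is not from the card, and the sampling settings are illustrative rather than the author's recommended values.

```python
# Minimal sketch continuing the loading snippet above.
prompt = "ויקיפדיה העברית"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=True, top_k=40, top_p=0.92, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```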
{}
RichardErkhov/Norod78_-_hebrew-bad_wiki-gpt_neo-tiny-4bits
null
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "arxiv:2105.09680", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-04-17T10:40:41+00:00
[ "1910.09700", "2105.09680" ]
[]
TAGS #transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models hebrew-bad_wiki-gpt_neo-tiny - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: he thumbnail: URL widget: - text: "מתמטיקה:" - text: "עליית המכונות" - text: "ויקיפדיה העברית" - text: "האירוויזיון הוא" - text: "דוד בן-גוריון היה" license: mit --- # hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - How to Get Started With the Model ## Model Details Model Description: The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - Developed by: Doron Adler - Model Type: Text Generation - Language(s): Hebrew - License: MIT - Resources for more information: - GitHub Repo - HuggingFace Space ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). ## Training #### Training Data Hebrew Wikipedia Dump (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. Fine-tuning on the wiki-absract text was done using @minimaxir's aitextgen. ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github * Activation Function: gelu * Number_Head: 12 * Number_Vocab: 50257 * Train batch size: 250 * Eval batch size: 64 * Predict batch size: 1 ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper. - Hardware Type: [More information needed] - Hours used: Unknown - Cloud Provider: GCP tpu-v8s - Compute Region: europe-west4 - Carbon Emitted: [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available here ​​
[ "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
[ "TAGS\n#transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #4-bit #region-us \n", "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # interpro_bert3 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4142 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 200 - eval_batch_size: 128 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 1600 - total_eval_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 1.2045 | 1.0 | 18425 | 1.1198 | | 0.8936 | 2.0 | 36850 | 0.8563 | | 0.7736 | 3.0 | 55275 | 0.7475 | | 0.6946 | 4.0 | 73700 | 0.6877 | | 0.8083 | 5.0 | 92125 | 0.7509 | | 0.677 | 6.0 | 110550 | 0.6578 | | 0.778 | 7.0 | 128975 | 0.7306 | | 0.6017 | 8.0 | 147400 | 0.5994 | | 0.5646 | 9.0 | 165825 | 0.5704 | | 0.5352 | 10.0 | 184250 | 0.5479 | | 0.532 | 11.0 | 202675 | 0.5496 | | 0.495 | 12.0 | 221100 | 0.5198 | | 0.4714 | 13.0 | 239525 | 0.4971 | | 0.4497 | 14.0 | 257950 | 0.4797 | | 0.4312 | 15.0 | 276375 | 0.4670 | | 0.4131 | 16.0 | 294800 | 0.4494 | | 0.4001 | 17.0 | 313225 | 0.4411 | | 0.3828 | 18.0 | 331650 | 0.4316 | | 0.3665 | 19.0 | 350075 | 0.4201 | | 0.3592 | 20.0 | 368500 | 0.4142 | ### Framework versions - Transformers 4.39.2 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
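The card gives no usage example. The sketch below is an editor's illustration rather than part of the card: the repository id comes from this entry's id field, and the masked input is a made-up placeholder in InterPro-accession style, not a real sample from the (undocumented) training data.

```python
# Minimal sketch: query the masked-LM head through the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="Dauka-transformers/interpro_bert3")

# Build an input around the tokenizer's own mask token; the surrounding tokens
# are placeholders, not real training examples.
mask = fill_mask.tokenizer.mask_token
for prediction in fill_mask(f"IPR000001 IPR000003 {mask} IPR000008"):
    print(prediction["token_str"], round(prediction["score"], 3))
```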
{"tags": ["generated_from_trainer"], "model-index": [{"name": "interpro_bert3", "results": []}]}
Dauka-transformers/interpro_bert3
null
[ "transformers", "tensorboard", "safetensors", "bert", "fill-mask", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:40:46+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us
interpro\_bert3 =============== This model is a fine-tuned version of [](URL on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4142 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0001 * train\_batch\_size: 200 * eval\_batch\_size: 128 * seed: 42 * distributed\_type: multi-GPU * num\_devices: 8 * total\_train\_batch\_size: 1600 * total\_eval\_batch\_size: 1024 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 20 ### Training results ### Framework versions * Transformers 4.39.2 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 200\n* eval\\_batch\\_size: 128\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 1600\n* total\\_eval\\_batch\\_size: 1024\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #bert #fill-mask #generated_from_trainer #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 200\n* eval\\_batch\\_size: 128\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 8\n* total\\_train\\_batch\\_size: 1600\n* total\\_eval\\_batch\\_size: 1024\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 20", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.2\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) distilgpt2-base-pretrained-he - bnb 8bits - Model creator: https://huggingface.co/Norod78/ - Original model: https://huggingface.co/Norod78/distilgpt2-base-pretrained-he/ Original model description: --- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" - text: "שלום, קרואים לי" - text: "הארי פוטר חייך חיוך נבוך" - text: "החתול שלך מאוד חמוד ו" license: mit --- # distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - [HomePage](https://data.statmt.org/cc-100/) This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR> * I have made a list of items which might make it easier for other to use this script. The list was posted to [This discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351) * Further training was performed on GPU ## Usage #### Simple usage sample code ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline def main(): model_name="Norod78/distilgpt2-base-pretrained-he" prompt_text = "שלום, קוראים לי" generated_max_length = 192 print("Loading model...") model = AutoModelForCausalLM.from_pretrained(model_name) print('Loading Tokenizer...') tokenizer = AutoTokenizer.from_pretrained(model_name) text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) print("Generating text...") result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature = 1, repetition_penalty=5.0, max_length = generated_max_length) print("result = " + str(result)) if __name__ == '__main__': main() ```
{}
RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:40:49+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models distilgpt2-base-pretrained-he - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: he thumbnail: URL widget: - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" - text: "שלום, קרואים לי" - text: "הארי פוטר חייך חיוך נבוך" - text: "החתול שלך מאוד חמוד ו" license: mit --- # distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - HomePage This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR> * I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum * Further training was performed on GPU ## Usage #### Simple usage sample code
[ "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# distilgpt2-base-pretrained-he\n\nA tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU.", "## Dataset", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n* Hebrew Twitter\n* Wikipedia\n* Various other sources", "## Training\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script <BR>\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU", "## Usage", "#### Simple usage sample code" ]
text-classification
bertopic
# transformers_issues_topics This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("mark230271/transformers_issues_topics") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 30 * Number of training documents: 9000 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | tokenizer - bert - tokenizers - pytorch - tensorflow | 11 | -1_tokenizer_bert_tokenizers_pytorch | | 0 | tokenizer - tokenizers - tokenization - berttokenizer - bart | 2376 | 0_tokenizer_tokenizers_tokenization_berttokenizer | | 1 | cuda - gpt2 - gpt - gpus - gpu | 1879 | 1_cuda_gpt2_gpt_gpus | | 2 | modelcard - modelcards - card - model - models | 735 | 2_modelcard_modelcards_card_model | | 3 | transformerscli - transformers - transformer - transformerxl - importerror | 412 | 3_transformerscli_transformers_transformer_transformerxl | | 4 | typeerror - attributeerror - valueerror - error - errors | 385 | 4_typeerror_attributeerror_valueerror_error | | 5 | trainertrain - trainer - trainerevaluate - trainers - training | 330 | 5_trainertrain_trainer_trainerevaluate_trainers | | 6 | seq2seq - seq2seqtrainer - s2s - runseq2seq - seq2seqdataset | 319 | 6_seq2seq_seq2seqtrainer_s2s_runseq2seq | | 7 | typos - typo - fix - correction - fixed | 306 | 7_typos_typo_fix_correction | | 8 | ci - testing - test - tests - circleci | 282 | 8_ci_testing_test_tests | | 9 | readmemd - readmetxt - readme - file - camembertbasereadmemd | 255 | 9_readmemd_readmetxt_readme_file | | 10 | t5 - t5model - tf - t5base - t5large | 255 | 10_t5_t5model_tf_t5base | | 11 | generationbeamsearchpy - beamsearch - groupbeamsearch - beam - search | 218 | 11_generationbeamsearchpy_beamsearch_groupbeamsearch_beam | | 12 | flax - distilbertmodel - flaubert - deberta - model | 185 | 12_flax_distilbertmodel_flaubert_deberta | | 13 | ner - pipeline - pipelines - nerpipeline - fillmaskpipeline | 177 | 13_ner_pipeline_pipelines_nerpipeline | | 14 | questionansweringpipeline - tfalbertforquestionanswering - questionanswering - distilbertforquestionanswering - answering | 161 | 14_questionansweringpipeline_tfalbertforquestionanswering_questionanswering_distilbertforquestionanswering | | 15 | huggingfacetransformers - huggingface - hugging - gluepy - gluebenchmarkcom | 133 | 15_huggingfacetransformers_huggingface_hugging_gluepy | | 16 | onnx - onnxonnxruntime - onnxexport - 04onnxexport - 04onnxexportipynb | 130 | 16_onnx_onnxonnxruntime_onnxexport_04onnxexport | | 17 | labelsmoothednllloss - labelsmoothingfactor - label - labels - labelsmoothing | 96 | 17_labelsmoothednllloss_labelsmoothingfactor_label_labels | | 18 | longformer - longformers - longform - longformerlayer - longformermodel | 73 | 18_longformer_longformers_longform_longformerlayer | | 19 | configpath - configs - config - configuration - modelconfigs | 59 | 19_configpath_configs_config_configuration | | 20 | wandbproject - wandb - sagemaker - sagemakertrainer - wandbcallback | 45 | 20_wandbproject_wandb_sagemaker_sagemakertrainer | | 21 | cachedir - cache - cachedpath - caching - cached | 33 | 
21_cachedir_cache_cachedpath_caching | | 22 | notebook - notebooks - community - colab - t5 | 33 | 22_notebook_notebooks_community_colab | | 23 | electra - electrapretrainedmodel - electraformaskedlm - electraformultiplechoice - electrafortokenclassification | 30 | 23_electra_electrapretrainedmodel_electraformaskedlm_electraformultiplechoice | | 24 | layoutlm - layout - layoutlmtokenizer - layoutlmbaseuncased - tf | 24 | 24_layoutlm_layout_layoutlmtokenizer_layoutlmbaseuncased | | 25 | isort - blackisortflake8 - github - repo - version | 18 | 25_isort_blackisortflake8_github_repo | | 26 | pplm - pr - deprecated - variable - ppl | 14 | 26_pplm_pr_deprecated_variable | | 27 | indexerror - index - missingindex - indices - runtimeerror | 14 | 27_indexerror_index_missingindex_indices | | 28 | ga - fork - forks - forked - push | 12 | 28_ga_fork_forks_forked | </details> ## Training hyperparameters * calculate_probabilities: False * language: english * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: 30 * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 2.0.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.6.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
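Beyond `get_topic_info()`, a loaded BERTopic model can assign topics to new documents with `transform`. The sketch below is not part of the card; the example issue title is made up.

```python
# Minimal sketch: predict the topic of a new (made-up) transformers issue title.
from bertopic import BERTopic

topic_model = BERTopic.load("mark230271/transformers_issues_topics")

docs = ["BertTokenizer raises a ValueError when decoding special tokens"]
topics, probs = topic_model.transform(docs)
print(topics[0], topic_model.get_topic(topics[0]))  # topic id and its keyword list
```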
{"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"}
mark230271/transformers_issues_topics
null
[ "bertopic", "text-classification", "region:us" ]
null
2024-04-17T10:40:51+00:00
[]
[]
TAGS #bertopic #text-classification #region-us
transformers\_issues\_topics ============================ This is a BERTopic model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. Usage ----- To use this model, please install BERTopic: You can use the model as follows: Topic overview -------------- * Number of topics: 30 * Number of training documents: 9000 Click here for an overview of all topics. Training hyperparameters ------------------------ * calculate\_probabilities: False * language: english * low\_memory: False * min\_topic\_size: 10 * n\_gram\_range: (1, 1) * nr\_topics: 30 * seed\_topic\_list: None * top\_n\_words: 10 * verbose: True * zeroshot\_min\_similarity: 0.7 * zeroshot\_topic\_list: None Framework versions ------------------ * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 2.0.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.6.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
[]
[ "TAGS\n#bertopic #text-classification #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) hebrew-bad_wiki-gpt_neo-tiny - bnb 8bits - Model creator: https://huggingface.co/Norod78/ - Original model: https://huggingface.co/Norod78/hebrew-bad_wiki-gpt_neo-tiny/ Original model description: --- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "מתמטיקה:" - text: "עליית המכונות" - text: "ויקיפדיה העברית" - text: "האירוויזיון הוא" - text: "דוד בן-גוריון היה" license: mit --- # hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - **Developed by:** [Doron Adler](https://github.com/Norod) - **Model Type:** Text Generation - **Language(s):** Hebrew - **License:** MIT - **Resources for more information:** - [GitHub Repo](https://github.com/Norod/hebrew-gpt_neo) - [HuggingFace Space](https://huggingface.co/spaces/Norod78/Hebrew-GPT-Neo-Small) ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data [Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny) which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning on the wiki-absract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the [hebrew-gpt_neo model github](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) * **Activation Function:** gelu * **Number_Head:** 12 * **Number_Vocab:** 50257 * **Train batch size:** 250 * **Eval batch size:** 64 * **Predict batch size:** 1 ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). 
- **Hardware Type:** [More information needed] - **Hours used:** Unknown - **Cloud Provider:** GCP tpu-v8s - **Compute Region:** europe-west4 - **Carbon Emitted:** [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) ​​ ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") ```
{}
RichardErkhov/Norod78_-_hebrew-bad_wiki-gpt_neo-tiny-8bits
null
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "arxiv:2105.09680", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-04-17T10:41:09+00:00
[ "1910.09700", "2105.09680" ]
[]
TAGS #transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models hebrew-bad_wiki-gpt_neo-tiny - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: he thumbnail: URL widget: - text: "מתמטיקה:" - text: "עליית המכונות" - text: "ויקיפדיה העברית" - text: "האירוויזיון הוא" - text: "דוד בן-גוריון היה" license: mit --- # hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - Model Details - Uses - Risks, Limitations and Biases - Training - Evaluation - Environmental Impact - How to Get Started With the Model ## Model Details Model Description: The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - Developed by: Doron Adler - Model Type: Text Generation - Language(s): Hebrew - License: MIT - Resources for more information: - GitHub Repo - HuggingFace Space ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes. Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)). ## Training #### Training Data Hebrew Wikipedia Dump (hewiki abstract) from May 2020 #### Training Procedure This model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. Fine-tuning on the wiki-absract text was done using @minimaxir's aitextgen. ## Evaluation #### Configs Model configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github * Activation Function: gelu * Number_Head: 12 * Number_Vocab: 50257 * Train batch size: 250 * Eval batch size: 64 * Predict batch size: 1 ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper. - Hardware Type: [More information needed] - Hours used: Unknown - Cloud Provider: GCP tpu-v8s - Compute Region: europe-west4 - Carbon Emitted: [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available here ​​
[ "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
[ "TAGS\n#transformers #safetensors #gpt_neo #text-generation #arxiv-1910.09700 #arxiv-2105.09680 #autotrain_compatible #endpoints_compatible #8-bit #region-us \n", "# hebrew-bad_wiki-gpt_neo-tiny", "## Table of Contents\n- Model Details\n- Uses\n- Risks, Limitations and Biases\n- Training\n- Evaluation\n- Environmental Impact\n- How to Get Started With the Model", "## Model Details\nModel Description:\n\nThe model developer notes that the model is \n> Hebrew nonsense generation model which produces really bad wiki-abstract text. \n\n\n- Developed by: Doron Adler\n- Model Type: Text Generation\n- Language(s): Hebrew\n- License: MIT\n- Resources for more information:\n- GitHub Repo\n- HuggingFace Space", "## Uses", "#### Direct Use\n\nThis model can be used for text generation.", "#### Misuse and Out-of-scope Use", "## Risks, Limitations and Biases\nCONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.\n\nSignificant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).", "## Training", "#### Training Data\n Hebrew Wikipedia Dump (hewiki abstract) from May 2020", "#### Training Procedure\n\n\nThis model was fined tuned upon hebrew-gpt_neo-tiny which was previously trained using EleutherAI's gpt-neo. \n\nFine-tuning on the wiki-absract text was done using @minimaxir's aitextgen.", "## Evaluation", "#### Configs\n\nModel configs for the hebrew-gpt_neo-tiny is available on the hebrew-gpt_neo model github \n\n* Activation Function: gelu\n* Number_Head: 12\n* Number_Vocab: 50257\n* Train batch size: 250\n* Eval batch size: 64\n* Predict batch size: 1", "## Environmental Impact\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). We present the hardware type based on the associated paper.\n\n\n- Hardware Type: [More information needed]\n\n- Hours used: Unknown\n\n- Cloud Provider: GCP tpu-v8s\n\n- Compute Region: europe-west4\n\n- Carbon Emitted: [More information needed]", "## How to Get Started With the Model\n\nA Google Colab Notebook is also available here\n\n\n​​" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) distilgpt2-base-pretrained-he - GGUF - Model creator: https://huggingface.co/Norod78/ - Original model: https://huggingface.co/Norod78/distilgpt2-base-pretrained-he/ | Name | Quant method | Size | | ---- | ---- | ---- | | [distilgpt2-base-pretrained-he.Q2_K.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q2_K.gguf) | Q2_K | 0.06GB | | [distilgpt2-base-pretrained-he.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.IQ3_XS.gguf) | IQ3_XS | 0.07GB | | [distilgpt2-base-pretrained-he.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.IQ3_S.gguf) | IQ3_S | 0.07GB | | [distilgpt2-base-pretrained-he.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q3_K_S.gguf) | Q3_K_S | 0.07GB | | [distilgpt2-base-pretrained-he.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.IQ3_M.gguf) | IQ3_M | 0.07GB | | [distilgpt2-base-pretrained-he.Q3_K.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q3_K.gguf) | Q3_K | 0.07GB | | [distilgpt2-base-pretrained-he.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q3_K_M.gguf) | Q3_K_M | 0.07GB | | [distilgpt2-base-pretrained-he.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q3_K_L.gguf) | Q3_K_L | 0.07GB | | [distilgpt2-base-pretrained-he.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.IQ4_XS.gguf) | IQ4_XS | 0.07GB | | [distilgpt2-base-pretrained-he.Q4_0.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q4_0.gguf) | Q4_0 | 0.08GB | | [distilgpt2-base-pretrained-he.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.IQ4_NL.gguf) | IQ4_NL | 0.08GB | | [distilgpt2-base-pretrained-he.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q4_K_S.gguf) | Q4_K_S | 0.08GB | | [distilgpt2-base-pretrained-he.Q4_K.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q4_K.gguf) | Q4_K | 0.08GB | | [distilgpt2-base-pretrained-he.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q4_K_M.gguf) | Q4_K_M | 0.08GB | | [distilgpt2-base-pretrained-he.Q4_1.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q4_1.gguf) | Q4_1 | 0.08GB | | 
[distilgpt2-base-pretrained-he.Q5_0.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q5_0.gguf) | Q5_0 | 0.09GB | | [distilgpt2-base-pretrained-he.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q5_K_S.gguf) | Q5_K_S | 0.09GB | | [distilgpt2-base-pretrained-he.Q5_K.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q5_K.gguf) | Q5_K | 0.09GB | | [distilgpt2-base-pretrained-he.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q5_K_M.gguf) | Q5_K_M | 0.09GB | | [distilgpt2-base-pretrained-he.Q5_1.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q5_1.gguf) | Q5_1 | 0.09GB | | [distilgpt2-base-pretrained-he.Q6_K.gguf](https://huggingface.co/RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf/blob/main/distilgpt2-base-pretrained-he.Q6_K.gguf) | Q6_K | 0.1GB | Original model description: --- language: he thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg widget: - text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" - text: "שלום, קרואים לי" - text: "הארי פוטר חייך חיוך נבוך" - text: "החתול שלך מאוד חמוד ו" license: mit --- # distilgpt2-base-pretrained-he A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. Then was further fine-tuned on GPU. ## Dataset ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - [HomePage](https://data.statmt.org/cc-100/) This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR> * I have made a list of items which might make it easier for other to use this script. 
The list was posted to [This discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351) * Further training was performed on GPU ## Usage #### Simple usage sample code ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline def main(): model_name="Norod78/distilgpt2-base-pretrained-he" prompt_text = "שלום, קוראים לי" generated_max_length = 192 print("Loading model...") model = AutoModelForCausalLM.from_pretrained(model_name) print('Loading Tokenizer...') tokenizer = AutoTokenizer.from_pretrained(model_name) text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) print("Generating text...") result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature = 1, repetition_penalty=5.0, max_length = generated_max_length) print("result = " + str(result)) if __name__ == '__main__': main() ```
{}
RichardErkhov/Norod78_-_distilgpt2-base-pretrained-he-gguf
null
[ "gguf", "region:us" ]
null
2024-04-17T10:41:27+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models distilgpt2-base-pretrained-he - GGUF * Model creator: URL * Original model: URL Name: distilgpt2-base-pretrained-he.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.06GB Name: distilgpt2-base-pretrained-he.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.07GB Name: distilgpt2-base-pretrained-he.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.07GB Name: distilgpt2-base-pretrained-he.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.07GB Name: distilgpt2-base-pretrained-he.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.07GB Name: distilgpt2-base-pretrained-he.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.07GB Name: distilgpt2-base-pretrained-he.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.07GB Name: distilgpt2-base-pretrained-he.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.07GB Name: distilgpt2-base-pretrained-he.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.07GB Name: distilgpt2-base-pretrained-he.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.08GB Name: distilgpt2-base-pretrained-he.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.08GB Name: distilgpt2-base-pretrained-he.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.08GB Name: distilgpt2-base-pretrained-he.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.08GB Name: distilgpt2-base-pretrained-he.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.08GB Name: distilgpt2-base-pretrained-he.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.08GB Name: distilgpt2-base-pretrained-he.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.09GB Name: distilgpt2-base-pretrained-he.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.09GB Name: distilgpt2-base-pretrained-he.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.09GB Name: distilgpt2-base-pretrained-he.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.09GB Name: distilgpt2-base-pretrained-he.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.09GB Name: distilgpt2-base-pretrained-he.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.1GB Original model description: --------------------------- language: he thumbnail: URL widget: * text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה" * text: "שלום, קרואים לי" * text: "הארי פוטר חייך חיוך נבוך" * text: "החתול שלך מאוד חמוד ו" license: mit ------------ distilgpt2-base-pretrained-he ============================= A tiny GPT2 based Hebrew text generation model initially trained on a TPUv3-8 which was made avilable to me via the TPU Research Cloud Program. Then was further fine-tuned on GPU. Dataset ------- ### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - HomePage This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources Training -------- * Done on a TPUv3-8 VM using Huggingface's clm-flax example script * I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum * Further training was performed on GPU Usage ----- #### Simple usage sample code
[ "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n\n\n* Hebrew Twitter\n* Wikipedia\n* Various other sources\n\n\nTraining\n--------\n\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU\n\n\nUsage\n-----", "#### Simple usage sample code" ]
[ "TAGS\n#gguf #region-us \n", "### oscar (unshuffled deduplicated he) - Homepage | Dataset Permalink\n\n\nThe Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.", "### CC-100 (he) - HomePage\n\n\nThis corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository.", "### Misc\n\n\n* Hebrew Twitter\n* Wikipedia\n* Various other sources\n\n\nTraining\n--------\n\n\n* Done on a TPUv3-8 VM using Huggingface's clm-flax example script\n* I have made a list of items which might make it easier for other to use this script. The list was posted to This discussion forum\n* Further training was performed on GPU\n\n\nUsage\n-----", "#### Simple usage sample code" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nahed22/lora-flan-t5-large-chat
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:41:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DialoGPT-sarcastic-medium - bnb 4bits - Model creator: https://huggingface.co/abhiramtirumala/ - Original model: https://huggingface.co/abhiramtirumala/DialoGPT-sarcastic-medium/ Original model description: Entry not found
{}
RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:45:09+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models DialoGPT-sarcastic-medium - bnb 4bits - Model creator: URL - Original model: URL Original model description: Entry not found
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt-2-tamil - bnb 4bits - Model creator: https://huggingface.co/abinayam/ - Original model: https://huggingface.co/abinayam/gpt-2-tamil/ Original model description: --- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language. ## Setup: To set up the project, run the following command, ```python pip install -r requirements.txt ``` ## Model: Pretrained model on the Tamil language using a causal language modeling (CLM) objective. ## Dataset Used: The GPT-2 model is trained on [oscar dataset - ta](https://huggingface.co/datasets/oscar) and [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/) ## Intended uses & limitations: You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, do the following steps, - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.) ```python >>> export MODEL_DIR=<model_dir> ``` - Create the config.json by running the following command, ```python >>> python src/create_config.py ``` - Create the tokenizer by running the following command, ```python >>> python src/train_tokenizer.py ``` - Once the config and tokenizer are created, run the following script to start training the flax model ```python >>> python scripts/train_gpt2-oscar-tamil.sh ``` ## How to use: To perform language generation using the model, the pipeline can be used directly. - First convert the flax model to pytorch using the following command, ```python python src/convert_flax_to_pytorch.py ``` - Use the following snippet to perform language generation, ```python >>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed >>> model_name = 'abinayam/gpt-2-tamil' >>> model = AutoModelWithLMHead.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> set_seed(42) >>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு" >>> max_len = 300 >>> no_seq = 5 >>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer) >>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq) ```
{}
RichardErkhov/abinayam_-_gpt-2-tamil-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:45:12+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt-2-tamil - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language. ## Setup: To setup the project, run the following command, ## Model: Pretrained model on Tamil language using a causal language modeling (CLM) objective. ## Dataset Used: The GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta ## Intended uses & limitations: You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, do the following steps, - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.) - Create the URL by running the following command, - Create the tokenizer by running the following command, - Once the config and tokenizer is created, run the following script to start training the flax model ## How to use: To perform language generation using the model, pipeline can be used directly. - First convert the flax model to pytorch using the following command, - Use the following snippet to perform language generation,
[ "# GPT2-Tamil\n\nThis repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language.", "## Setup:\nTo setup the project, run the following command,", "## Model:\nPretrained model on Tamil language using a causal language modeling (CLM) objective.", "## Dataset Used:\nThe GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta", "## Intended uses & limitations:\nYou can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.", "## How to pretrain the model:\nTo perform training, do the following steps,\n\n- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)\n\n- Create the URL by running the following command,\n\n- Create the tokenizer by running the following command,\n\n- Once the config and tokenizer is created, run the following script to start training the flax model", "## How to use:\nTo perform language generation using the model, pipeline can be used directly.\n\n- First convert the flax model to pytorch using the following command,\n\n- Use the following snippet to perform language generation," ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# GPT2-Tamil\n\nThis repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language.", "## Setup:\nTo setup the project, run the following command,", "## Model:\nPretrained model on Tamil language using a causal language modeling (CLM) objective.", "## Dataset Used:\nThe GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta", "## Intended uses & limitations:\nYou can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.", "## How to pretrain the model:\nTo perform training, do the following steps,\n\n- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)\n\n- Create the URL by running the following command,\n\n- Create the tokenizer by running the following command,\n\n- Once the config and tokenizer is created, run the following script to start training the flax model", "## How to use:\nTo perform language generation using the model, pipeline can be used directly.\n\n- First convert the flax model to pytorch using the following command,\n\n- Use the following snippet to perform language generation," ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
spygaurad/code_mix_0_4.5k_peft
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:45:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt-2-tamil - bnb 8bits - Model creator: https://huggingface.co/abinayam/ - Original model: https://huggingface.co/abinayam/gpt-2-tamil/ Original model description: --- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for the Tamil language. ## Setup: To set up the project, run the following command, ```python pip install -r requirements.txt ``` ## Model: Pretrained model on the Tamil language using a causal language modeling (CLM) objective. ## Dataset Used: The GPT-2 model is trained on [oscar dataset - ta](https://huggingface.co/datasets/oscar) and [IndicNLP dataset - ta](https://indicnlp.ai4bharat.org/corpora/) ## Intended uses & limitations: You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, do the following steps, - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.) ```python >>> export MODEL_DIR=<model_dir> ``` - Create the config.json by running the following command, ```python >>> python src/create_config.py ``` - Create the tokenizer by running the following command, ```python >>> python src/train_tokenizer.py ``` - Once the config and tokenizer are created, run the following script to start training the flax model ```python >>> python scripts/train_gpt2-oscar-tamil.sh ``` ## How to use: To perform language generation using the model, the pipeline can be used directly. - First convert the flax model to pytorch using the following command, ```python python src/convert_flax_to_pytorch.py ``` - Use the following snippet to perform language generation, ```python >>> from transformers import AutoTokenizer, AutoModelWithLMHead, pipeline, set_seed >>> model_name = 'abinayam/gpt-2-tamil' >>> model = AutoModelWithLMHead.from_pretrained(model_name) >>> tokenizer = AutoTokenizer.from_pretrained(model_name) >>> set_seed(42) >>> input_text = "ஒரு ஊரிலே ஒரு காக்கைக்கு" >>> max_len = 300 >>> no_seq = 5 >>> generator = pipeline('text-generation', model=model, tokenizer=tokenizer) >>> sequence = generator(input_text, max_length=max_len, num_return_sequences=no_seq) ```
{}
RichardErkhov/abinayam_-_gpt-2-tamil-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:45:47+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt-2-tamil - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: ta datasets: - oscar - IndicNLP widget: - text: 'ஒரு ஊரிலே ஒரு காக்கைக்கு' --- # GPT2-Tamil This repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language. ## Setup: To setup the project, run the following command, ## Model: Pretrained model on Tamil language using a causal language modeling (CLM) objective. ## Dataset Used: The GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta ## Intended uses & limitations: You can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you. ## How to pretrain the model: To perform training, do the following steps, - Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.) - Create the URL by running the following command, - Create the tokenizer by running the following command, - Once the config and tokenizer is created, run the following script to start training the flax model ## How to use: To perform language generation using the model, pipeline can be used directly. - First convert the flax model to pytorch using the following command, - Use the following snippet to perform language generation,
[ "# GPT2-Tamil\n\nThis repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language.", "## Setup:\nTo setup the project, run the following command,", "## Model:\nPretrained model on Tamil language using a causal language modeling (CLM) objective.", "## Dataset Used:\nThe GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta", "## Intended uses & limitations:\nYou can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.", "## How to pretrain the model:\nTo perform training, do the following steps,\n\n- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)\n\n- Create the URL by running the following command,\n\n- Create the tokenizer by running the following command,\n\n- Once the config and tokenizer is created, run the following script to start training the flax model", "## How to use:\nTo perform language generation using the model, pipeline can be used directly.\n\n- First convert the flax model to pytorch using the following command,\n\n- Use the following snippet to perform language generation," ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# GPT2-Tamil\n\nThis repository is created as part of the Flax/Jax community week by Huggingface. The aim of this project is to pretrain a language model using GPT-2 specifically for Tamil language.", "## Setup:\nTo setup the project, run the following command,", "## Model:\nPretrained model on Tamil language using a causal language modeling (CLM) objective.", "## Dataset Used:\nThe GTP-2 model is trained on oscar dataset - ta and IndicNLP dataset - ta", "## Intended uses & limitations:\nYou can use the raw model for next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.", "## How to pretrain the model:\nTo perform training, do the following steps,\n\n- Export the model directory (where you want to store the model artifacts like config, tokenizer, etc.)\n\n- Create the URL by running the following command,\n\n- Create the tokenizer by running the following command,\n\n- Once the config and tokenizer is created, run the following script to start training the flax model", "## How to use:\nTo perform language generation using the model, pipeline can be used directly.\n\n- First convert the flax model to pytorch using the following command,\n\n- Use the following snippet to perform language generation," ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DialoGPT-sarcastic-medium - bnb 8bits - Model creator: https://huggingface.co/abhiramtirumala/ - Original model: https://huggingface.co/abhiramtirumala/DialoGPT-sarcastic-medium/ Original model description: Entry not found
{}
RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:45:48+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models DialoGPT-sarcastic-medium - bnb 8bits - Model creator: URL - Original model: URL Original model description: Entry not found
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) DialoGPT-sarcastic-medium - GGUF - Model creator: https://huggingface.co/abhiramtirumala/ - Original model: https://huggingface.co/abhiramtirumala/DialoGPT-sarcastic-medium/ | Name | Quant method | Size | | ---- | ---- | ---- | | [DialoGPT-sarcastic-medium.Q2_K.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q2_K.gguf) | Q2_K | 0.07GB | | [DialoGPT-sarcastic-medium.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [DialoGPT-sarcastic-medium.IQ3_S.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.IQ3_S.gguf) | IQ3_S | 0.08GB | | [DialoGPT-sarcastic-medium.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [DialoGPT-sarcastic-medium.IQ3_M.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.IQ3_M.gguf) | IQ3_M | 0.09GB | | [DialoGPT-sarcastic-medium.Q3_K.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q3_K.gguf) | Q3_K | 0.09GB | | [DialoGPT-sarcastic-medium.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [DialoGPT-sarcastic-medium.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q3_K_L.gguf) | Q3_K_L | 0.09GB | | [DialoGPT-sarcastic-medium.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | [DialoGPT-sarcastic-medium.Q4_0.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q4_0.gguf) | Q4_0 | 0.1GB | | [DialoGPT-sarcastic-medium.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [DialoGPT-sarcastic-medium.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [DialoGPT-sarcastic-medium.Q4_K.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q4_K.gguf) | Q4_K | 0.1GB | | [DialoGPT-sarcastic-medium.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [DialoGPT-sarcastic-medium.Q4_1.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q4_1.gguf) | Q4_1 | 0.1GB | | [DialoGPT-sarcastic-medium.Q5_0.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q5_0.gguf) | Q5_0 | 0.11GB | | 
[DialoGPT-sarcastic-medium.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [DialoGPT-sarcastic-medium.Q5_K.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q5_K.gguf) | Q5_K | 0.12GB | | [DialoGPT-sarcastic-medium.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | [DialoGPT-sarcastic-medium.Q5_1.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q5_1.gguf) | Q5_1 | 0.12GB | | [DialoGPT-sarcastic-medium.Q6_K.gguf](https://huggingface.co/RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf/blob/main/DialoGPT-sarcastic-medium.Q6_K.gguf) | Q6_K | 0.13GB | Original model description: Entry not found
{}
RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf
null
[ "gguf", "region:us" ]
null
2024-04-17T10:46:20+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models DialoGPT-sarcastic-medium - GGUF * Model creator: URL * Original model: URL Name: DialoGPT-sarcastic-medium.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.07GB Name: DialoGPT-sarcastic-medium.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.08GB Name: DialoGPT-sarcastic-medium.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.08GB Name: DialoGPT-sarcastic-medium.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.08GB Name: DialoGPT-sarcastic-medium.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.09GB Name: DialoGPT-sarcastic-medium.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.09GB Name: DialoGPT-sarcastic-medium.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.09GB Name: DialoGPT-sarcastic-medium.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.09GB Name: DialoGPT-sarcastic-medium.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.09GB Name: DialoGPT-sarcastic-medium.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.1GB Name: DialoGPT-sarcastic-medium.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.1GB Name: DialoGPT-sarcastic-medium.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.1GB Name: DialoGPT-sarcastic-medium.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.1GB Name: DialoGPT-sarcastic-medium.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.1GB Name: DialoGPT-sarcastic-medium.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.1GB Name: DialoGPT-sarcastic-medium.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.11GB Name: DialoGPT-sarcastic-medium.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.11GB Name: DialoGPT-sarcastic-medium.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.12GB Name: DialoGPT-sarcastic-medium.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.12GB Name: DialoGPT-sarcastic-medium.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.12GB Name: DialoGPT-sarcastic-medium.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.13GB Original model description: Entry not found
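The quant listing above only names the files. As a minimal sketch (not part of the original card), one of the listed quants can be fetched with `huggingface_hub` and then handed to a GGUF runtime; the repo id and filename below are taken from this record, while the choice of Q4_K_M and the runtime are assumptions:

```python
from huggingface_hub import hf_hub_download

# Hedged sketch: repo id and filename come from the quant table above;
# picking the Q4_K_M file is only an example.
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/abhiramtirumala_-_DialoGPT-sarcastic-medium-gguf",
    filename="DialoGPT-sarcastic-medium.Q4_K_M.gguf",
)
print(gguf_path)  # pass this local path to a GGUF runtime such as llama.cpp
```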
[]
[ "TAGS\n#gguf #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-indonesia - bnb 4bits - Model creator: https://huggingface.co/akahana/ - Original model: https://huggingface.co/akahana/gpt2-indonesia/ Original model description: --- language: "id" widget: - text: "dahulu kala ada sebuah" --- ## how to use ```python from transformers import pipeline, set_seed path = "akahana/gpt2-indonesia" generator = pipeline('text-generation', model=path) set_seed(42) kalimat = "dahulu kala ada sebuah" preds = generator(kalimat, max_length=64, num_return_sequences=3) for data in preds: print(data) {'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'} {'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'} {'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'} ```
{}
RichardErkhov/akahana_-_gpt2-indonesia-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:46:42+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt2-indonesia - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: "id" widget: - text: "dahulu kala ada sebuah" --- ## how to use
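The code under "how to use" was stripped from this processed copy; below is a condensed sketch of the pipeline example kept in the raw card above (model id and sampling settings are taken from that card):

```python
from transformers import pipeline, set_seed

# Condensed from the usage example in the original card.
generator = pipeline("text-generation", model="akahana/gpt2-indonesia")
set_seed(42)

preds = generator("dahulu kala ada sebuah", max_length=64, num_return_sequences=3)
for data in preds:
    print(data["generated_text"])
```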
[ "## how to use" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "## how to use" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Noodlz/Dolph-Lund-Wizard-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Dolph-Lund-Wizard-7B-GGUF/resolve/main/Dolph-Lund-Wizard-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
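The card defers GGUF usage to TheBloke's READMEs. As a minimal sketch (an assumption, not something the card specifies), one of the single-file quants listed above can be run with the `llama-cpp-python` bindings:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hedged sketch: assumes the Q4_K_M file from the table above has been
# downloaded locally; context size and prompt are arbitrary examples.
llm = Llama(model_path="Dolph-Lund-Wizard-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about a hidden treasure.", max_tokens=64)
print(out["choices"][0]["text"])
```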
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Noodlz/Dolph-Lund-Wizard-7B", "quantized_by": "mradermacher"}
mradermacher/Dolph-Lund-Wizard-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Noodlz/Dolph-Lund-Wizard-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:46:42+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Noodlz/Dolph-Lund-Wizard-7B #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Noodlz/Dolph-Lund-Wizard-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-indonesia - bnb 8bits - Model creator: https://huggingface.co/akahana/ - Original model: https://huggingface.co/akahana/gpt2-indonesia/ Original model description: --- language: "id" widget: - text: "dahulu kala ada sebuah" --- ## how to use ```python from transformers import pipeline, set_seed path = "akahana/gpt2-indonesia" generator = pipeline('text-generation', model=path) set_seed(42) kalimat = "dahulu kala ada sebuah" preds = generator(kalimat, max_length=64, num_return_sequences=3) for data in preds: print(data) {'generated_text': 'dahulu kala ada sebuah perkampungan yang bernama pomere. namun kini kawasan ini sudah tidak dikembangkan lagi sebagai kawasan industri seperti perusahaan pupuk. sumber-sumber lain sudah sulit ditemukan karena belum adanya kilang pupuk milik indonesia yang sering di kembangkan sehingga belum ada satupun yang masih tersisa yang tersisa. kawasan ini juga memproduksi gula aren milik pt graha bina sarana'} {'generated_text': 'dahulu kala ada sebuah desa kecil bernama desa. desa yang terkenal seperti halnya kota terdekat lainnya adalah desa tetangga yang bernama sama."\n"sebuah masjid merupakan suatu tempat suci yang digunakan umat islam untuk beribadah. beberapa masjid yang didaftarkan berikut memiliki suatu kehormatan tersendiri bagi masing-masing denominasi islam di dunia. sebuah masjid selain memiliki fungsi sebagai tempat'} {'generated_text': 'dahulu kala ada sebuah peradaban yang dibangun di sebelah barat sungai mississippi di sekitar desa kecil desa yang bernama sama. penduduk asli di desa ini berasal dari etnis teweh yang berpindah agama menjadi kristen, namun kemudian pindah agama menjadi kristen. desa arawak mempunyai beberapa desa lain seperti adibei, deti, riuhut dan sa'} ```
{}
RichardErkhov/akahana_-_gpt2-indonesia-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:47:06+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt2-indonesia - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: "id" widget: - text: "dahulu kala ada sebuah" --- ## how to use
[ "## how to use" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "## how to use" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-small-arabic - bnb 4bits - Model creator: https://huggingface.co/akhooli/ - Original model: https://huggingface.co/akhooli/gpt2-small-arabic/ Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing). Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using free GPU. ## Eval results Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info ```bibtex @inproceedings{Abed Khooli, year={2020} } ```
{}
RichardErkhov/akhooli_-_gpt2-small-arabic-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:48:02+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt2-small-arabic - bnb 4bits - Model creator: URL - Original model: URL Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this colab notebook. Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using Fastai2 library on Kaggle, using free GPU. ## Eval results Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info
[ "# GPT2-Small-Arabic", "## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).", "## Intended uses & limitations", "#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.", "## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).", "## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.", "## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# GPT2-Small-Arabic", "## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).", "## Intended uses & limitations", "#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.", "## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).", "## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.", "## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-small-arabic - bnb 8bits - Model creator: https://huggingface.co/akhooli/ - Original model: https://huggingface.co/akhooli/gpt2-small-arabic/ Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing). Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using free GPU. ## Eval results Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info ```bibtex @inproceedings{Abed Khooli, year={2020} } ```
{}
RichardErkhov/akhooli_-_gpt2-small-arabic-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:48:28+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt2-small-arabic - bnb 8bits - Model creator: URL - Original model: URL Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this colab notebook. Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using Fastai2 library on Kaggle, using free GPU. ## Eval results Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info
[ "# GPT2-Small-Arabic", "## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).", "## Intended uses & limitations", "#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.", "## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).", "## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.", "## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# GPT2-Small-Arabic", "## Model description\n\nGPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2).", "## Intended uses & limitations", "#### How to use\n\nAn example is provided in this colab notebook. \nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. \nUse as demonstration or proof of concepts but not as production code.", "## Training data\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).", "## Training procedure\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.", "## Eval results \nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) gpt2-small-arabic - GGUF - Model creator: https://huggingface.co/akhooli/ - Original model: https://huggingface.co/akhooli/gpt2-small-arabic/ | Name | Quant method | Size | | ---- | ---- | ---- | | [gpt2-small-arabic.Q2_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q2_K.gguf) | Q2_K | 0.08GB | | [gpt2-small-arabic.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_XS.gguf) | IQ3_XS | 0.08GB | | [gpt2-small-arabic.IQ3_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_S.gguf) | IQ3_S | 0.08GB | | [gpt2-small-arabic.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_S.gguf) | Q3_K_S | 0.08GB | | [gpt2-small-arabic.IQ3_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ3_M.gguf) | IQ3_M | 0.09GB | | [gpt2-small-arabic.Q3_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K.gguf) | Q3_K | 0.09GB | | [gpt2-small-arabic.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [gpt2-small-arabic.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q3_K_L.gguf) | Q3_K_L | 0.1GB | | [gpt2-small-arabic.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ4_XS.gguf) | IQ4_XS | 0.1GB | | [gpt2-small-arabic.Q4_0.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_0.gguf) | Q4_0 | 0.1GB | | [gpt2-small-arabic.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [gpt2-small-arabic.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [gpt2-small-arabic.Q4_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K.gguf) | Q4_K | 0.11GB | | [gpt2-small-arabic.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_K_M.gguf) | Q4_K_M | 0.11GB | | [gpt2-small-arabic.Q4_1.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q4_1.gguf) | Q4_1 | 0.11GB | | [gpt2-small-arabic.Q5_0.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_0.gguf) | Q5_0 | 0.11GB | | [gpt2-small-arabic.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [gpt2-small-arabic.Q5_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K.gguf) | Q5_K | 0.12GB | | [gpt2-small-arabic.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_K_M.gguf) | Q5_K_M | 0.12GB | | 
[gpt2-small-arabic.Q5_1.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q5_1.gguf) | Q5_1 | 0.12GB | | [gpt2-small-arabic.Q6_K.gguf](https://huggingface.co/RichardErkhov/akhooli_-_gpt2-small-arabic-gguf/blob/main/gpt2-small-arabic.Q6_K.gguf) | Q6_K | 0.13GB | Original model description: --- language: "ar" datasets: - Arabic Wikipedia metrics: - none --- # GPT2-Small-Arabic ## Model description GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). ## Intended uses & limitations #### How to use An example is provided in this [colab notebook](https://colab.research.google.com/drive/1mRl7c-5v-Klx27EEAEOAbrfkustL4g7a?usp=sharing). Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. ## Training data This pretrained model used the Arabic Wikipedia dump (around 900 MB). ## Training procedure Training was done using [Fastai2](https://github.com/fastai/fastai2/) library on Kaggle, using free GPU. ## Eval results Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info ```bibtex @inproceedings{Abed Khooli, year={2020} } ```
{}
RichardErkhov/akhooli_-_gpt2-small-arabic-gguf
null
[ "gguf", "region:us" ]
null
2024-04-17T10:49:02+00:00
[]
[]
TAGS #gguf #region-us
Quantization made by Richard Erkhov. Github Discord Request more models gpt2-small-arabic - GGUF * Model creator: URL * Original model: URL Name: gpt2-small-arabic.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.08GB Name: gpt2-small-arabic.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.08GB Name: gpt2-small-arabic.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.08GB Name: gpt2-small-arabic.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.08GB Name: gpt2-small-arabic.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.09GB Name: gpt2-small-arabic.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.09GB Name: gpt2-small-arabic.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.09GB Name: gpt2-small-arabic.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.1GB Name: gpt2-small-arabic.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.1GB Name: gpt2-small-arabic.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.1GB Name: gpt2-small-arabic.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.1GB Name: gpt2-small-arabic.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.1GB Name: gpt2-small-arabic.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.11GB Name: gpt2-small-arabic.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.11GB Name: gpt2-small-arabic.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.11GB Name: gpt2-small-arabic.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.11GB Name: gpt2-small-arabic.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.11GB Name: gpt2-small-arabic.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.12GB Name: gpt2-small-arabic.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.12GB Name: gpt2-small-arabic.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.12GB Name: gpt2-small-arabic.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.13GB Original model description: --------------------------- language: "ar" datasets: * Arabic Wikipedia metrics: * none --- GPT2-Small-Arabic ================= Model description ----------------- GPT2 model from Arabic Wikipedia dataset based on gpt2-small (using Fastai2). Intended uses & limitations --------------------------- #### How to use An example is provided in this colab notebook. Both text and poetry (fine-tuned model) generation are included. #### Limitations and bias GPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance. Use as demonstration or proof of concepts but not as production code. Training data ------------- This pretrained model used the Arabic Wikipedia dump (around 900 MB). Training procedure ------------------ Training was done using Fastai2 library on Kaggle, using free GPU. Eval results ------------ Final perplexity reached was 72.19, loss: 4.28, accuracy: 0.307 ### BibTeX entry and citation info
[ "#### How to use\n\n\nAn example is provided in this colab notebook.\nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance.\nUse as demonstration or proof of concepts but not as production code.\n\n\nTraining data\n-------------\n\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).\n\n\nTraining procedure\n------------------\n\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.\n\n\nEval results\n------------\n\n\nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
[ "TAGS\n#gguf #region-us \n", "#### How to use\n\n\nAn example is provided in this colab notebook.\nBoth text and poetry (fine-tuned model) generation are included.", "#### Limitations and bias\n\n\nGPT2-small-arabic (trained on Arabic Wikipedia) has several limitations in terms of coverage (Arabic Wikipeedia quality, no diacritics) and training performance.\nUse as demonstration or proof of concepts but not as production code.\n\n\nTraining data\n-------------\n\n\nThis pretrained model used the Arabic Wikipedia dump (around 900 MB).\n\n\nTraining procedure\n------------------\n\n\nTraining was done using Fastai2 library on Kaggle, using free GPU.\n\n\nEval results\n------------\n\n\nFinal perplexity reached was 72.19, loss: 4.28, accuracy: 0.307", "### BibTeX entry and citation info" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) belgpt2 - bnb 4bits - Model creator: https://huggingface.co/antoinelouis/ - Original model: https://huggingface.co/antoinelouis/belgpt2/ Original model description: --- language: - fr license: - mit widget: - text: Hier, Elon Musk a - text: Pourquoi a-t-il - text: Tout à coup, elle metrics: - perplexity library_name: transformers pipeline_tag: text-generation --- # BelGPT-2 **The 1st GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb).** ## Usage You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers): ```python import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pretrained model and tokenizer model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2") tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2") # Generate a sample of text model.eval() output = model.generate( bos_token_id=random.randint(1,50000), do_sample=True, top_k=50, max_length=100, top_p=0.95, num_return_sequences=1 ) # Decode it decoded_output = [] for sample in output: decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True)) print(decoded_output) ``` ## Data Below is the list of all French copora used to pre-trained the model: | Dataset | `$corpus_name` | Raw size | Cleaned size | | :------| :--- | :---: | :---: | | CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB | | NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB | | Wikipedia | `wiki` | 19.4 GB | 4.1 GB | | Wikisource | `wikisource` | 4.6 GB | 2.3 GB | | Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB | | EuroParl | `europarl` | 289.9 MB | 278.7 MB | | NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB | | **Total** | | **236.3 GB** | **57.9 GB** | ## Documentation Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/ant-louis/belgpt2/blob/master/docs/index.md). ## Citation For attribution in academic contexts, please cite this work as: ``` @misc{louis2020belgpt2, author = {Louis, Antoine}, title = {{BelGPT-2: A GPT-2 Model Pre-trained on French Corpora}}, year = {2020}, howpublished = {\url{https://github.com/ant-louis/belgpt2}}, } ```
{}
RichardErkhov/antoinelouis_-_belgpt2-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:49:32+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models belgpt2 - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: * fr license: * mit widget: * text: Hier, Elon Musk a * text: Pourquoi a-t-il * text: Tout à coup, elle metrics: * perplexity library\_name: transformers pipeline\_tag: text-generation --- BelGPT-2 ======== The 1st GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb). Usage ----- You can use BelGPT-2 with transformers: Data ---- Below is the list of all French corpora used to pre-train the model: Documentation ------------- Detailed documentation on the pre-trained model, its implementation, and the data can be found here. For attribution in academic contexts, please cite this work as:
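The generation snippet referenced by the Usage heading above was dropped from this processed copy; below is a condensed sketch of the example in the raw card, adding the `random` import that the original snippet relies on but does not show:

```python
import random
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Condensed from the original card's generation example.
model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2")
tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2")

model.eval()
output = model.generate(
    bos_token_id=random.randint(1, 50000),  # random start token, as in the card
    do_sample=True,
    top_k=50,
    max_length=100,
    top_p=0.95,
    num_return_sequences=1,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```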
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) belgpt2 - bnb 8bits - Model creator: https://huggingface.co/antoinelouis/ - Original model: https://huggingface.co/antoinelouis/belgpt2/ Original model description: --- language: - fr license: - mit widget: - text: Hier, Elon Musk a - text: Pourquoi a-t-il - text: Tout à coup, elle metrics: - perplexity library_name: transformers pipeline_tag: text-generation --- # BelGPT-2 **The 1st GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb).** ## Usage You can use BelGPT-2 with [🤗 transformers](https://github.com/huggingface/transformers): ```python import torch from transformers import GPT2Tokenizer, GPT2LMHeadModel # Load pretrained model and tokenizer model = GPT2LMHeadModel.from_pretrained("antoiloui/belgpt2") tokenizer = GPT2Tokenizer.from_pretrained("antoiloui/belgpt2") # Generate a sample of text model.eval() output = model.generate( bos_token_id=random.randint(1,50000), do_sample=True, top_k=50, max_length=100, top_p=0.95, num_return_sequences=1 ) # Decode it decoded_output = [] for sample in output: decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True)) print(decoded_output) ``` ## Data Below is the list of all French copora used to pre-trained the model: | Dataset | `$corpus_name` | Raw size | Cleaned size | | :------| :--- | :---: | :---: | | CommonCrawl | `common_crawl` | 200.2 GB | 40.4 GB | | NewsCrawl | `news_crawl` | 10.4 GB | 9.8 GB | | Wikipedia | `wiki` | 19.4 GB | 4.1 GB | | Wikisource | `wikisource` | 4.6 GB | 2.3 GB | | Project Gutenberg | `gutenberg` | 1.3 GB | 1.1 GB | | EuroParl | `europarl` | 289.9 MB | 278.7 MB | | NewsCommentary | `news_commentary` | 61.4 MB | 58.1 MB | | **Total** | | **236.3 GB** | **57.9 GB** | ## Documentation Detailed documentation on the pre-trained model, its implementation, and the data can be found [here](https://github.com/ant-louis/belgpt2/blob/master/docs/index.md). ## Citation For attribution in academic contexts, please cite this work as: ``` @misc{louis2020belgpt2, author = {Louis, Antoine}, title = {{BelGPT-2: A GPT-2 Model Pre-trained on French Corpora}}, year = {2020}, howpublished = {\url{https://github.com/ant-louis/belgpt2}}, } ```
{}
RichardErkhov/antoinelouis_-_belgpt2-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:50:01+00:00
[]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models belgpt2 - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: * fr license: * mit widget: * text: Hier, Elon Musk a * text: Pourquoi a-t-il * text: Tout à coup, elle metrics: * perplexity library\_name: transformers pipeline\_tag: text-generation --- BelGPT-2 ======== The 1st GPT-2 model pre-trained on a very large and heterogeneous French corpus (~60Gb). Usage ----- You can use BelGPT-2 with transformers: Data ---- Below is the list of all French corpora used to pre-train the model: Documentation ------------- Detailed documentation on the pre-trained model, its implementation, and the data can be found here. For attribution in academic contexts, please cite this work as:
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="arh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
arh/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-17T10:50:05+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1. ## Usage
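The loading snippet was stripped from this processed copy. Below is a sketch of fetching the Q-table and running one greedy episode; the pickle layout with "env_id" and "qtable" keys is an assumption based on the Deep RL course format, and `load_from_hub` is re-implemented here so the snippet is self-contained:

```python
import pickle
import numpy as np
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Hedged sketch: re-implements the course's load_from_hub helper; the
# "env_id"/"qtable" keys are assumed from the course's pickle layout.
def load_from_hub(repo_id: str, filename: str) -> dict:
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub("arh/q-FrozenLake-v1-4x4-noSlippery", "q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # see the card's is_slippery note

state, _ = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # act greedily w.r.t. the Q-table
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print("episode return:", total_reward)
```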
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
text-generation
transformers
![Tesoro](https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png) # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible. # Sample code to run inference ```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/Tess-2.0-Mixtral-8x22B" output_file_path = "./conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` # Join My General AI Discord (NeuroLattice): https://discord.gg/Hz6GrwGFKD # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
{"license": "apache-2.0"}
blockblockblock/Tess-2.0-Mixtral-8x22B-bpw2.5
null
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T10:50:06+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!Tesoro # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible. # Sample code to run inference # Join My General AI Discord (NeuroLattice): URL # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
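The prompt format and inference script were stripped from this processed copy; below is a condensed sketch of the example kept in the raw card above (model id, SYSTEM/USER/ASSISTANT format, and sampling settings come from that card; the example question is arbitrary):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Condensed from the original card's inference example.
model_path = "migtissera/Tess-2.0-Mixtral-8x22B"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

# SYSTEM / USER / ASSISTANT prompt format described in the card.
prompt = (
    "SYSTEM: Answer the question thoughtfully and intelligently.\n"
    "USER: What does 'Tesoro' mean?\n"
    "ASSISTANT: "
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True,
    temperature=0.5, top_p=1.0, top_k=50,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```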
[ "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="arh/baba", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "baba", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.67", "name": "mean_reward", "verified": false}]}]}]}
arh/baba
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-17T10:50:54+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3. ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aragpt2-base - bnb 4bits - Model creator: https://huggingface.co/aubmindlab/ - Original model: https://huggingface.co/aubmindlab/aragpt2-base/ Original model description: --- language: ar datasets: - wikipedia - Osian - 1.5B-Arabic-Corpus - oscar-arabic-unshuffled - Assafir(private) widget: - text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" - text: "القدس مدينة تاريخية، بناها الكنعانيون في" - text: "كان يا ما كان في قديم الزمان" --- # Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. 
# Usage ## Testing the model using `transformers`: ```python from transformers import GPT2TokenizerFast, pipeline #for base and medium from transformers import GPT2LMHeadModel #for large and mega # pip install arabert from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-base' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = GPT2LMHeadModel.from_pretrained(MODEL_NAME) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline("text-generation",model=model,tokenizer=tokenizer) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \\r\n --input_file="gs://<GS_BUCKET>/pretraining_data/*" \\r\n --output_dir="gs://<GS_BUCKET>/pretraining_model/" \\r\n --config_file="config/small_hparams.json" \\r\n --batch_size=128 \\r\n --eval_batch_size=8 \\r\n --num_train_steps= \\r\n --num_warmup_steps= \\r\n --learning_rate= \\r\n --save_checkpoints_steps= \\r\n --max_seq_length=1024 \\r\n --max_eval_steps= \\r\n --optimizer="lamb" \\r\n --iterations_per_loop=5000 \\r\n --keep_checkpoint_max=10 \\r\n --use_tpu=True \\r\n --tpu_name=<TPU NAME> \\r\n --do_train=True \\r\n --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: - OSCAR unshuffled and filtered. 
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{}
RichardErkhov/aubmindlab_-_aragpt2-base-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:51:11+00:00
[ "2012.15520" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models aragpt2-base - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: ar datasets: * wikipedia * Osian * 1.5B-Arabic-Corpus * oscar-arabic-unshuffled * Assafir(private) widget: * text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" * text: "القدس مدينة تاريخية، بناها الكنعانيون في" * text: "كان يا ما كان في قديم الزمان" --- Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- Dataset ======= The pretraining data used for the new AraGPT2 model is also used for AraBERTv2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. 
Another thanks to Habib Rahal (URL), for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aragpt2-base - bnb 8bits - Model creator: https://huggingface.co/aubmindlab/ - Original model: https://huggingface.co/aubmindlab/aragpt2-base/ Original model description: --- language: ar datasets: - wikipedia - Osian - 1.5B-Arabic-Corpus - oscar-arabic-unshuffled - Assafir(private) widget: - text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" - text: "القدس مدينة تاريخية، بناها الكنعانيون في" - text: "كان يا ما كان في قديم الزمان" --- # Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. 
# Usage

## Testing the model using `transformers`:

```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME='aubmindlab/aragpt2-base'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text=""
text_clean = arabert_prep.preprocess(text)

model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)

#feel free to try different decoding settings
generation_pipeline(text,
    pad_token_id=tokenizer.eos_token_id,
    num_beams=10,
    max_length=200,
    top_p=0.9,
    repetition_penalty=3.0,
    no_repeat_ngram_size=3)[0]['generated_text']
```

## Finetuning using `transformers`:

Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)

## Finetuning using our code with TF 1.15.4:

Create the Training TFRecords:

```bash
python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```

Finetuning:

```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```

# Model Sizes

Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Compute

Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9

# Dataset

The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation).

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filtered it, to the dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
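The same pattern applies to this 8-bit variant; the sketch below assumes `bitsandbytes` and `accelerate` are installed and that the int8 settings ship with the checkpoint config. The prompt is a widget example from the card.

```python
# Sketch: load the 8-bit AraGPT2-base quantization and generate with beam search.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/aubmindlab_-_aragpt2-base-8bits"  # this repository

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",          # place the int8 weights on the available GPU(s)
    torch_dtype=torch.float16,  # dtype used for the non-quantized modules
)

prompt = "القدس مدينة تاريخية، بناها الكنعانيون في"  # widget example from the card
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_length=100, num_beams=5, no_repeat_ngram_size=3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```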
{}
RichardErkhov/aubmindlab_-_aragpt2-base-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:51:40+00:00
[ "2012.15520" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models aragpt2-base - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: ar datasets: * wikipedia * Osian * 1.5B-Arabic-Corpus * oscar-arabic-unshuffled * Assafir(private) widget: * text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" * text: "القدس مدينة تاريخية، بناها الكنعانيون في" * text: "كان يا ما كان في قديم الزمان" --- Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- Dataset ======= The pretraining data used for the new AraGPT2 model is also used for AraBERTv2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. 
Another thanks to Habib Rahal (URL), for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n" ]
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aragpt2-base - GGUF - Model creator: https://huggingface.co/aubmindlab/ - Original model: https://huggingface.co/aubmindlab/aragpt2-base/ | Name | Quant method | Size | | ---- | ---- | ---- | | [aragpt2-base.Q2_K.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q2_K.gguf) | Q2_K | 0.09GB | | [aragpt2-base.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.IQ3_XS.gguf) | IQ3_XS | 0.1GB | | [aragpt2-base.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.IQ3_S.gguf) | IQ3_S | 0.1GB | | [aragpt2-base.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q3_K_S.gguf) | Q3_K_S | 0.1GB | | [aragpt2-base.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.IQ3_M.gguf) | IQ3_M | 0.1GB | | [aragpt2-base.Q3_K.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q3_K.gguf) | Q3_K | 0.1GB | | [aragpt2-base.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q3_K_M.gguf) | Q3_K_M | 0.1GB | | [aragpt2-base.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q3_K_L.gguf) | Q3_K_L | 0.11GB | | [aragpt2-base.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.IQ4_XS.gguf) | IQ4_XS | 0.11GB | | [aragpt2-base.Q4_0.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q4_0.gguf) | Q4_0 | 0.11GB | | [aragpt2-base.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.IQ4_NL.gguf) | IQ4_NL | 0.11GB | | [aragpt2-base.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q4_K_S.gguf) | Q4_K_S | 0.11GB | | [aragpt2-base.Q4_K.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q4_K.gguf) | Q4_K | 0.12GB | | [aragpt2-base.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q4_K_M.gguf) | Q4_K_M | 0.12GB | | [aragpt2-base.Q4_1.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q4_1.gguf) | Q4_1 | 0.12GB | | [aragpt2-base.Q5_0.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q5_0.gguf) | Q5_0 | 0.13GB | | [aragpt2-base.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q5_K_S.gguf) | Q5_K_S | 0.13GB | | [aragpt2-base.Q5_K.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q5_K.gguf) | Q5_K | 0.13GB | | [aragpt2-base.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q5_K_M.gguf) | Q5_K_M | 0.13GB | | [aragpt2-base.Q5_1.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q5_1.gguf) | Q5_1 | 0.14GB | | [aragpt2-base.Q6_K.gguf](https://huggingface.co/RichardErkhov/aubmindlab_-_aragpt2-base-gguf/blob/main/aragpt2-base.Q6_K.gguf) | Q6_K | 0.15GB | Original model 
description:
---
language: ar
datasets:
- wikipedia
- Osian
- 1.5B-Arabic-Corpus
- oscar-arabic-unshuffled
- Assafir(private)
widget:
- text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال"
- text: "القدس مدينة تاريخية، بناها الكنعانيون في"
- text: "كان يا ما كان في قديم الزمان"
---

# Arabic GPT2

<img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/>

You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520)

The code in this repository was used to train all GPT2 variants. The code supports training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API.

GPT2-base and medium use the code from the `gpt2` folder and can train models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library.

GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizers use too much memory, causing the model to not even fit 1 batch on a TPU core.

AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2.

# Usage

## Testing the model using `transformers`:

```python
from transformers import GPT2TokenizerFast, pipeline
#for base and medium
from transformers import GPT2LMHeadModel
#for large and mega
# pip install arabert
from arabert.aragpt2.grover.modeling_gpt2 import GPT2LMHeadModel
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME='aubmindlab/aragpt2-base'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text=""
text_clean = arabert_prep.preprocess(text)

model = GPT2LMHeadModel.from_pretrained(MODEL_NAME)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)

#feel free to try different decoding settings
generation_pipeline(text,
    pad_token_id=tokenizer.eos_token_id,
    num_beams=10,
    max_length=200,
    top_p=0.9,
    repetition_penalty=3.0,
    no_repeat_ngram_size=3)[0]['generated_text']
```

## Finetuning using `transformers`:

Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)

## Finetuning using our code with TF 1.15.4:

Create the Training TFRecords:

```bash
python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```

Finetuning:

```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```

# Model Sizes

Model
| Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB / 135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraGPT2 model is also used for **AraBERTv2 and AraELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: - OSCAR unshuffled and filtered. - [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
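A minimal sketch of running one of the GGUF files listed above locally. It assumes the chosen file (here `aragpt2-base.Q4_K_M.gguf`, a name taken from the table) has already been downloaded, that `llama-cpp-python` is installed, and that the installed llama.cpp build supports this converted GPT-2-style architecture; treat it as illustrative rather than a tested recipe.

```python
# Sketch: local inference with a downloaded GGUF quantization via llama-cpp-python.
# Assumes: pip install llama-cpp-python, and the .gguf file saved next to this script.
from llama_cpp import Llama

llm = Llama(
    model_path="aragpt2-base.Q4_K_M.gguf",  # any of the quant files from the table above
    n_ctx=1024,                             # AraGPT2 context size reported in the card
)

out = llm(
    "كان يا ما كان في قديم الزمان",  # widget example prompt from the original card
    max_tokens=100,
    temperature=0.8,
    repeat_penalty=1.3,
)
print(out["choices"][0]["text"])
```

Smaller quants (Q2_K, Q3_K) trade quality for size, while Q5/Q6 stay closer to the original weights; which one to pick depends on the memory budget.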
{}
RichardErkhov/aubmindlab_-_aragpt2-base-gguf
null
[ "gguf", "arxiv:2012.15520", "region:us" ]
null
2024-04-17T10:52:15+00:00
[ "2012.15520" ]
[]
TAGS #gguf #arxiv-2012.15520 #region-us
Quantization made by Richard Erkhov. Github Discord Request more models aragpt2-base - GGUF * Model creator: URL * Original model: URL Name: aragpt2-base.Q2\_K.gguf, Quant method: Q2\_K, Size: 0.09GB Name: aragpt2-base.IQ3\_XS.gguf, Quant method: IQ3\_XS, Size: 0.1GB Name: aragpt2-base.IQ3\_S.gguf, Quant method: IQ3\_S, Size: 0.1GB Name: aragpt2-base.Q3\_K\_S.gguf, Quant method: Q3\_K\_S, Size: 0.1GB Name: aragpt2-base.IQ3\_M.gguf, Quant method: IQ3\_M, Size: 0.1GB Name: aragpt2-base.Q3\_K.gguf, Quant method: Q3\_K, Size: 0.1GB Name: aragpt2-base.Q3\_K\_M.gguf, Quant method: Q3\_K\_M, Size: 0.1GB Name: aragpt2-base.Q3\_K\_L.gguf, Quant method: Q3\_K\_L, Size: 0.11GB Name: aragpt2-base.IQ4\_XS.gguf, Quant method: IQ4\_XS, Size: 0.11GB Name: aragpt2-base.Q4\_0.gguf, Quant method: Q4\_0, Size: 0.11GB Name: aragpt2-base.IQ4\_NL.gguf, Quant method: IQ4\_NL, Size: 0.11GB Name: aragpt2-base.Q4\_K\_S.gguf, Quant method: Q4\_K\_S, Size: 0.11GB Name: aragpt2-base.Q4\_K.gguf, Quant method: Q4\_K, Size: 0.12GB Name: aragpt2-base.Q4\_K\_M.gguf, Quant method: Q4\_K\_M, Size: 0.12GB Name: aragpt2-base.Q4\_1.gguf, Quant method: Q4\_1, Size: 0.12GB Name: aragpt2-base.Q5\_0.gguf, Quant method: Q5\_0, Size: 0.13GB Name: aragpt2-base.Q5\_K\_S.gguf, Quant method: Q5\_K\_S, Size: 0.13GB Name: aragpt2-base.Q5\_K.gguf, Quant method: Q5\_K, Size: 0.13GB Name: aragpt2-base.Q5\_K\_M.gguf, Quant method: Q5\_K\_M, Size: 0.13GB Name: aragpt2-base.Q5\_1.gguf, Quant method: Q5\_1, Size: 0.14GB Name: aragpt2-base.Q6\_K.gguf, Quant method: Q6\_K, Size: 0.15GB Original model description: --------------------------- language: ar datasets: * wikipedia * Osian * 1.5B-Arabic-Corpus * oscar-arabic-unshuffled * Assafir(private) widget: * text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" * text: "القدس مدينة تاريخية، بناها الكنعانيون في" * text: "كان يا ما كان في قديم الزمان" --- Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. Usage ===== Testing the model using 'transformers': --------------------------------------- Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. 
Compute ------- Dataset ======= The pretraining data used for the new AraGPT2 model is also used for AraBERTv2 and AraELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus after we thoroughly filter it, to the dataset used in AraBERTv1 but without the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by AraGPT2 is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by AraGPT2 should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#gguf #arxiv-2012.15520 #region-us \n" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # baseroberta-finetuned_squadcovid This model is a fine-tuned version of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.6662 | 0.21 | 1000 | 1.0762 | | 0.7148 | 0.42 | 2000 | 0.9627 | | 0.6548 | 0.64 | 3000 | 0.8933 | | 0.601 | 0.85 | 4000 | 0.8712 | | 0.5623 | 1.06 | 5000 | 0.8938 | | 0.4915 | 1.27 | 6000 | 0.8678 | | 0.4772 | 1.49 | 7000 | 0.8568 | | 0.4709 | 1.7 | 8000 | 0.8479 | | 0.4616 | 1.91 | 9000 | 0.8426 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
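The card reports training details but no inference example; a minimal sketch using the `question-answering` pipeline is below. The checkpoint id is the one this card is published under, and the question/context pair is purely illustrative.

```python
# Sketch: extractive question answering with the fine-tuned RoBERTa checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="Rahul13/baseroberta-finetuned_squadcovid")

# Hypothetical SQuAD-style input; substitute any (question, context) pair.
result = qa(
    question="What task was the model fine-tuned for?",
    context="The checkpoint was fine-tuned on a SQuAD-style dataset to extract answer spans from passages.",
)
print(result["answer"], round(result["score"], 3))
```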
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "FacebookAI/roberta-base", "model-index": [{"name": "baseroberta-finetuned_squadcovid", "results": []}]}
Rahul13/baseroberta-finetuned_squadcovid
null
[ "transformers", "safetensors", "roberta", "question-answering", "generated_from_trainer", "base_model:FacebookAI/roberta-base", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:55:41+00:00
[]
[]
TAGS #transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us
baseroberta-finetuned\_squadcovid ================================= This model is a fine-tuned version of FacebookAI/roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.8426 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #roberta #question-answering #generated_from_trainer #base_model-FacebookAI/roberta-base #license-mit #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Large - Denis Musinguzi This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the Common Voice 14.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2966 - Wer: 0.2467 - Cer: 0.0700 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Cer | Validation Loss | Wer | |:-------------:|:-----:|:----:|:------:|:---------------:|:------:| | 0.6329 | 0.61 | 1600 | 0.0878 | 0.3515 | 0.3385 | | 0.2241 | 1.22 | 3200 | 0.0589 | 0.3045 | 0.2517 | | 0.1618 | 1.82 | 4800 | 0.0707 | 0.2801 | 0.2645 | | 0.1109 | 2.43 | 6400 | 0.0774 | 0.2870 | 0.2580 | | 0.0837 | 3.04 | 8000 | 0.0597 | 0.2900 | 0.2333 | | 0.045 | 3.65 | 9600 | 0.2966 | 0.2467 | 0.0700 | ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1 - Datasets 2.17.0 - Tokenizers 0.15.2
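For completeness, a minimal transcription sketch with the fine-tuned checkpoint; the model id is the one this card is published under, and `audio.wav` is a placeholder path to any 16 kHz speech recording.

```python
# Sketch: speech-to-text with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dmusingu/WHISPER-MEDIUM-LUGANDA-ASR-CV-14",
    chunk_length_s=30,  # chunk long recordings into 30-second windows
)

print(asr("audio.wav")["text"])  # "audio.wav" is a placeholder file path
```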
{"language": ["sw"], "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_14_0"], "metrics": ["wer"], "base_model": "openai/whisper-large", "model-index": [{"name": "Whisper Large - Denis Musinguzi", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 14.0", "type": "mozilla-foundation/common_voice_14_0", "config": "lg", "split": "None", "args": "config: sw, split: test"}, "metrics": [{"type": "wer", "value": 0.24669449134992194, "name": "Wer"}]}]}]}
dmusingu/WHISPER-MEDIUM-LUGANDA-ASR-CV-14
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "sw", "dataset:mozilla-foundation/common_voice_14_0", "base_model:openai/whisper-large", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:55:45+00:00
[]
[ "sw" ]
TAGS #transformers #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #sw #dataset-mozilla-foundation/common_voice_14_0 #base_model-openai/whisper-large #license-apache-2.0 #model-index #endpoints_compatible #region-us
Whisper Large - Denis Musinguzi =============================== This model is a fine-tuned version of openai/whisper-large on the Common Voice 14.0 dataset. It achieves the following results on the evaluation set: * Loss: 0.2966 * Wer: 0.2467 * Cer: 0.0700 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 1e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 32 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * training\_steps: 10000 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.1 * Pytorch 2.2.1 * Datasets 2.17.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 10000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #hf-asr-leaderboard #generated_from_trainer #sw #dataset-mozilla-foundation/common_voice_14_0 #base_model-openai/whisper-large #license-apache-2.0 #model-index #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* training\\_steps: 10000\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1\n* Datasets 2.17.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aragpt2-large - bnb 4bits - Model creator: https://huggingface.co/aubmindlab/ - Original model: https://huggingface.co/aubmindlab/aragpt2-large/ Original model description: --- language: ar license: other license_name: custom license_link: https://github.com/aub-mind/arabert/blob/master/aragpt2/LICENSE datasets: - wikipedia - Osian - arabic-billion-words - oscar - Assafir-private inference: false widget: - text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" - text: "القدس مدينة تاريخية، بناها الكنعانيون في" - text: "كان يا ما كان في قديم الزمان" --- # Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # NOTE: The model expects the input to be preprocessed using the `arabert` library. if not the model won't be able to generate the correct output. 
## Testing the model using `transformers`:

The model code is now hosted on HuggingFace so you need to use the `trust_remote_code` flag, and can be used as follows:

```python
from transformers import AutoModelForCausalLM, GPT2TokenizerFast, pipeline
from arabert.preprocess import ArabertPreprocessor

MODEL_NAME='aubmindlab/aragpt2-large'
arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME)

text=""
text_clean = arabert_prep.preprocess(text)

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True)
tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME)
generation_pipeline = pipeline(
    "text-generation",
    model=MODEL_NAME,
    trust_remote_code=True
)

#feel free to try different decoding settings
generation_pipeline(text,
    pad_token_id=generation_pipeline.tokenizer.eos_token_id,
    num_beams=10,
    max_length=200,
    top_p=0.9,
    repetition_penalty=3.0,
    no_repeat_ngram_size=3)[0]['generated_text']
```

## Finetuning using `transformers`:

Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed)

## Finetuning using our code with TF 1.15.4:

Create the Training TFRecords:

```bash
python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files>
```

Finetuning:

```bash
python3 run_pretraining.py \
 --input_file="gs://<GS_BUCKET>/pretraining_data/*" \
 --output_dir="gs://<GS_BUCKET>/pretraining_model/" \
 --config_file="config/small_hparams.json" \
 --batch_size=128 \
 --eval_batch_size=8 \
 --num_train_steps= \
 --num_warmup_steps= \
 --learning_rate= \
 --save_checkpoints_steps= \
 --max_seq_length=1024 \
 --max_eval_steps= \
 --optimizer="lamb" \
 --iterations_per_loop=5000 \
 --keep_checkpoint_max=10 \
 --use_tpu=True \
 --tpu_name=<TPU NAME> \
 --do_train=True \
 --do_eval=False
```

# Model Sizes

Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params |
---|:---:|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M |
AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 | 1.38G/370M |
AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M |
AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B |

All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats.

## Compute

For Dataset Source see the [Dataset Section](#Dataset)

Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days)
---|:---:|:---:|:---:|:---:|:---:
AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5
AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5
AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3
AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9

# Dataset

The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation).

For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filtered it, to the previous dataset used in AraBERTv1 but without the websites that we previously crawled:
- OSCAR unshuffled and filtered.
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
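A minimal loading sketch for this 4-bit quantization of AraGPT2-large. Because the original model ships custom grover-based code, `trust_remote_code=True` is required (as the card above notes); `bitsandbytes`, `accelerate` and `arabert` are assumed to be installed, and the prompt is first cleaned with `ArabertPreprocessor` as the card stresses.

```python
# Sketch: load the 4-bit AraGPT2-large quant; trust_remote_code pulls in the grover classes.
from transformers import AutoModelForCausalLM, AutoTokenizer
from arabert.preprocess import ArabertPreprocessor  # pip install arabert

repo_id = "RichardErkhov/aubmindlab_-_aragpt2-large-4bits"  # this repository

prep = ArabertPreprocessor(model_name="aubmindlab/aragpt2-large")
text = prep.preprocess("القدس مدينة تاريخية، بناها الكنعانيون في")  # widget example

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True, device_map="auto")

inputs = tokenizer(text, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_length=120, num_beams=5, no_repeat_ngram_size=3)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```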
{}
RichardErkhov/aubmindlab_-_aragpt2-large-4bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "custom_code", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T10:55:45+00:00
[ "2012.15520" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #custom_code #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models aragpt2-large - bnb 4bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: ar license: other license\_name: custom license\_link: URL datasets: * wikipedia * Osian * arabic-billion-words * oscar * Assafir-private inference: false widget: * text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" * text: "القدس مدينة تاريخية، بناها الكنعانيون في" * text: "كان يا ما كان في قديم الزمان" --- Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. NOTE: The model expects the input to be preprocessed using the 'arabert' library. ================================================================================= if not the model won't be able to generate the correct output. Testing the model using 'transformers': --------------------------------------- The model code is now hosted on HuggingFace so you need to use the 'trust\_remote\_code' flag, and can be used as follows: Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- For Dataset Source see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. 
If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #custom_code #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) aragpt2-large - bnb 8bits - Model creator: https://huggingface.co/aubmindlab/ - Original model: https://huggingface.co/aubmindlab/aragpt2-large/ Original model description: --- language: ar license: other license_name: custom license_link: https://github.com/aub-mind/arabert/blob/master/aragpt2/LICENSE datasets: - wikipedia - Osian - arabic-billion-words - oscar - Assafir-private inference: false widget: - text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" - text: "القدس مدينة تاريخية، بناها الكنعانيون في" - text: "كان يا ما كان في قديم الزمان" --- # Arabic GPT2 <img src="https://raw.githubusercontent.com/aub-mind/arabert/master/AraGPT2.png" width="100" align="left"/> You can find more information in our paper [AraGPT2](https://arxiv.org/abs/2012.15520) The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the `gpt2` folder and can trains models from the [minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple) repository. These models were trained using the `lamb` optimizer and follow the same architecture as `gpt2` and are fully compatible with the `transformers` library. GPT2-large and GPT2-mega were trained using the [imcaspar/gpt2-ml](https://github.com/imcaspar/gpt2-ml/) library, and follow the `grover` architecture. You can use the pytorch classes found in `grover/modeling_gpt2.py` as a direct replacement for classes in the `transformers` library (it should support version `v4.x` from `transformers`). Both models are trained using the `adafactor` optimizer, since the `adam` and `lamb` optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. # NOTE: The model expects the input to be preprocessed using the `arabert` library. if not the model won't be able to generate the correct output. 
## Testing the model using `transformers`: The model code is now hosted on HuggingFace so you need to use the `trust_remote_code` flag, and can be used as follows: ```python from transformers import AutoModelForCausalLM, pipeline from arabert.preprocess import ArabertPreprocessor MODEL_NAME='aubmindlab/aragpt2-large' arabert_prep = ArabertPreprocessor(model_name=MODEL_NAME) text="" text_clean = arabert_prep.preprocess(text) model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, trust_remote_code=True) tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_NAME) generation_pipeline = pipeline( "text-generation", model=MODEL_NAME, trust_remote_code=True ) #feel free to try different decoding settings generation_pipeline(text, pad_token_id=pipeline.tokenizer.eos_token_id, num_beams=10, max_length=200, top_p=0.9, repetition_penalty = 3.0, no_repeat_ngram_size = 3)[0]['generated_text'] >>> ``` ## Finetunning using `transformers`: Follow the guide linked [here](https://towardsdatascience.com/fine-tuning-gpt2-on-colab-gpu-for-free-340468c92ed) ## Finetuning using our code with TF 1.15.4: Create the Training TFRecords: ```bash python create_pretraining_data.py --input_file=<RAW TEXT FILE with documents/article separated by an empty line> --output_file=<OUTPUT TFRecord> --tokenizer_dir=<Directory with the GPT2 Tokenizer files> ``` Finetuning: ```bash python3 run_pretraining.py \ --input_file="gs://<GS_BUCKET>/pretraining_data/*" \ --output_dir="gs://<GS_BUCKET>/pretraining_model/" \ --config_file="config/small_hparams.json" \ --batch_size=128 \ --eval_batch_size=8 \ --num_train_steps= \ --num_warmup_steps= \ --learning_rate= \ --save_checkpoints_steps= \ --max_seq_length=1024 \ --max_eval_steps= \ --optimizer="lamb" \ --iterations_per_loop=5000 \ --keep_checkpoint_max=10 \ --use_tpu=True \ --tpu_name=<TPU NAME> \ --do_train=True \ --do_eval=False ``` # Model Sizes Model | Optimizer | Context size | Embedding Size | Num of heads | Num of layers | Model Size / Num of Params | ---|:---:|:---:|:---:|:---:|:---:|:---: AraGPT2-base | `lamb` | 1024 | 768 | 12 | 12 | 527MB/135M | AraGPT2-medium | `lamb` | 1024 | 1024 | 16 | 24 |1.38G/370M | AraGPT2-large | `adafactor` | 1024 | 1280 | 20 | 36 | 2.98GB/792M | AraGPT2-mega | `adafactor` | 1024 | 1536 | 25 | 48 | 5.5GB/1.46B | All models are available in the `HuggingFace` model page under the [aubmindlab](https://huggingface.co/aubmindlab/) name. Checkpoints are available in PyTorch, TF2 and TF1 formats. ## Compute For Dataset Source see the [Dataset Section](#Dataset) Model | Hardware | num of examples (seq len = 1024) | Batch Size | Num of Steps | Time (in days) ---|:---:|:---:|:---:|:---:|:---: AraGPT2-base | TPUv3-128 | 9.7M | 1792 | 125K | 1.5 AraGPT2-medium | TPUv3-8 | 9.7M | 1152 | 85K | 1.5 AraGPT2-large | TPUv3-128 | 9.7M | 256 | 220k | 3 AraGPT2-mega | TPUv3-128 | 9.7M | 256 | 780K | 9 # Dataset The pretraining data used for the new AraBERT model is also used for **GPT2 and ELECTRA**. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: - OSCAR unshuffled and filtered. 
- [Arabic Wikipedia dump](https://archive.org/details/arwiki-20190201) from 2020/09/01 - [The 1.5B words Arabic Corpus](https://www.semanticscholar.org/paper/1.5-billion-words-Arabic-Corpus-El-Khair/f3eeef4afb81223df96575adadf808fe7fe440b4) - [The OSIAN Corpus](https://www.aclweb.org/anthology/W19-4619) - Assafir news articles. Huge thank you for Assafir for giving us the data # Disclaimer The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. If it infringes on your rights and interests or violates social morality, please do not propagate it. # If you used this model please cite us as : ``` @inproceedings{antoun-etal-2021-aragpt2, title = "{A}ra{GPT}2: Pre-Trained Transformer for {A}rabic Language Generation", author = "Antoun, Wissam and Baly, Fady and Hajj, Hazem", booktitle = "Proceedings of the Sixth Arabic Natural Language Processing Workshop", month = apr, year = "2021", address = "Kyiv, Ukraine (Virtual)", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.wanlp-1.21", pages = "196--207", } ``` # Acknowledgments Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the [AUB MIND Lab](https://sites.aub.edu.lb/mindlab/) Members for the continuous support. Also thanks to [Yakshof](https://www.yakshof.com/#/) and Assafir for data and storage access. Another thanks for Habib Rahal (https://www.behance.net/rahalhabib), for putting a face to AraBERT. # Contacts **Wissam Antoun**: [Linkedin](https://www.linkedin.com/in/wissam-antoun-622142b4/) | [Twitter](https://twitter.com/wissam_antoun) | [Github](https://github.com/WissamAntoun) | <[email protected]> | <[email protected]> **Fady Baly**: [Linkedin](https://www.linkedin.com/in/fadybaly/) | [Twitter](https://twitter.com/fadybaly) | [Github](https://github.com/fadybaly) | <[email protected]> | <[email protected]>
{}
RichardErkhov/aubmindlab_-_aragpt2-large-8bits
null
[ "transformers", "safetensors", "gpt2", "text-generation", "custom_code", "arxiv:2012.15520", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T10:56:45+00:00
[ "2012.15520" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #custom_code #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
Quantization made by Richard Erkhov. Github Discord Request more models aragpt2-large - bnb 8bits * Model creator: URL * Original model: URL Original model description: --------------------------- language: ar license: other license\_name: custom license\_link: URL datasets: * wikipedia * Osian * arabic-billion-words * oscar * Assafir-private inference: false widget: * text: "يحكى أن مزارعا مخادعا قام ببيع بئر الماء الموجود في أرضه لجاره مقابل مبلغ كبير من المال" * text: "القدس مدينة تاريخية، بناها الكنعانيون في" * text: "كان يا ما كان في قديم الزمان" --- Arabic GPT2 =========== <img src="URL width="100" align="left"/> You can find more information in our paper AraGPT2 The code in this repository was used to train all GPT2 variants. The code support training and fine-tuning GPT2 on GPUs and TPUs via the TPUEstimator API. GPT2-base and medium uses the code from the 'gpt2' folder and can trains models from the minimaxir/gpt-2-simple repository. These models were trained using the 'lamb' optimizer and follow the same architecture as 'gpt2' and are fully compatible with the 'transformers' library. GPT2-large and GPT2-mega were trained using the imcaspar/gpt2-ml library, and follow the 'grover' architecture. You can use the pytorch classes found in 'grover/modeling\_gpt2.py' as a direct replacement for classes in the 'transformers' library (it should support version 'v4.x' from 'transformers'). Both models are trained using the 'adafactor' optimizer, since the 'adam' and 'lamb' optimizer use too much memory causing the model to not even fit 1 batch on a TPU core. AraGPT2 is trained on the same large Arabic Dataset as AraBERTv2. NOTE: The model expects the input to be preprocessed using the 'arabert' library. ================================================================================= if not the model won't be able to generate the correct output. Testing the model using 'transformers': --------------------------------------- The model code is now hosted on HuggingFace so you need to use the 'trust\_remote\_code' flag, and can be used as follows: Finetunning using 'transformers': --------------------------------- Follow the guide linked here Finetuning using our code with TF 1.15.4: ----------------------------------------- Create the Training TFRecords: Finetuning: Model Sizes =========== All models are available in the 'HuggingFace' model page under the aubmindlab name. Checkpoints are available in PyTorch, TF2 and TF1 formats. Compute ------- For Dataset Source see the Dataset Section Dataset ======= The pretraining data used for the new AraBERT model is also used for GPT2 and ELECTRA. The dataset consists of 77GB or 200,095,961 lines or 8,655,948,860 words or 82,232,988,358 chars (before applying Farasa Segmentation) For the new dataset we added the unshuffled OSCAR corpus, after we thoroughly filter it, to the previous dataset used in AraBERTv1 but with out the websites that we previously crawled: * OSCAR unshuffled and filtered. * Arabic Wikipedia dump from 2020/09/01 * The 1.5B words Arabic Corpus * The OSIAN Corpus * Assafir news articles. Huge thank you for Assafir for giving us the data Disclaimer ========== The text generated by GPT2 Arabic is automatically generated by a neural network model trained on a large amount of texts, which does not represent the authors' or their institutes' official attitudes and preferences. The text generated by GPT2 Arabic should only be used for research and scientific purposes. 
If it infringes on your rights and interests or violates social morality, please do not propagate it. If you used this model please cite us as : ========================================== Acknowledgments =============== Thanks to TensorFlow Research Cloud (TFRC) for the free access to Cloud TPUs, couldn't have done it without this program, and to the AUB MIND Lab Members for the continuous support. Also thanks to Yakshof and Assafir for data and storage access. Another thanks for Habib Rahal (URL for putting a face to AraBERT. Contacts ======== Wissam Antoun: Linkedin | Twitter | Github | [wfa07@URL](mailto:wfa07@URL) | [URL@URL](mailto:URL@URL) Fady Baly: Linkedin | Twitter | Github | [fgb06@URL](mailto:fgb06@URL) | [URL@URL](mailto:URL@URL)
[]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #custom_code #arxiv-2012.15520 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Noodlz/WizardLaker-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/WizardLaker-7B-GGUF/resolve/main/WizardLaker-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
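The usage section above defers to TheBloke's READMEs; as one hedged example (not taken from this card), the quants listed in the table can be run with `llama-cpp-python` roughly as follows. The repo id and filename come from the table; the choice of the Q4_K_M file and the generation settings are illustrative assumptions.

```python
# Hedged example: run one of the listed GGUF quants with llama-cpp-python.
# Repo id and filename come from the table above; everything else is illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/WizardLaker-7B-GGUF",
    filename="WizardLaker-7B.Q4_K_M.gguf",  # marked "fast, recommended" in the table
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("What is a large language model?", max_tokens=128)
print(out["choices"][0]["text"])
```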
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "Noodlz/WizardLaker-7B", "quantized_by": "mradermacher"}
mradermacher/WizardLaker-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Noodlz/WizardLaker-7B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T10:56:45+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-Noodlz/WizardLaker-7B #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-Noodlz/WizardLaker-7B #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
tgritsaev/my-awesome-model
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:00:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
Check G-reen/EXPERIMENT-ORPO-m7b2-2-merged (https://huggingface.co/G-reen/EXPERIMENT-ORPO-m7b2-2-merged) for details.
{}
G-reen/EXPERIMENT-ORPO-m7b2-2-lora
null
[ "safetensors", "region:us" ]
null
2024-04-17T11:00:33+00:00
[]
[]
TAGS #safetensors #region-us
Check G-reen/EXPERIMENT-ORPO-m7b2-2-merged (URL for details.
[]
[ "TAGS\n#safetensors #region-us \n" ]
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
lauragordo/chest_xray
null
[ "fastai", "has_space", "region:us" ]
null
2024-04-17T11:00:37+00:00
[]
[]
TAGS #fastai #has_space #region-us
# Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the documentation here)! 2. Create a demo in Gradio or Streamlit using Spaces (documentation here). 3. Join the fastai community on the Fastai Discord! Greetings fellow fastlearner ! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
[ "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
[ "TAGS\n#fastai #has_space #region-us \n", "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
text-generation
transformers
*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.* **Benchmarks** Average 59.54 ARC 59.64 HellaSwag 82.44 MMLU 62.25 TruthfulQA 40.09 Winogrande 78.37 GSM8K 34.42 **Training Details** Duration: ~9 hours on one Kaggle T4 with Unsloth Model: https://huggingface.co/unsloth/mistral-7b-v0.2-bnb-4bit Dataset: https://huggingface.co/datasets/argilla/dpo-mix-7k Rank: 8 Alpha: 16 Learning rate: 5e-6 Beta: 0.1 Batch size: 8 Epochs: 1 Learning rate scheduler: Linear Prompt Format: ChatML ``` <|im_start|>system You are a helpful assistant.<|im_end|> <|im_start|>user Why is the sky blue?<|im_end|> <|im_start|>assistant ``` **WanDB Reports** ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/Mn02CupOCn_PkcGF4wu3P.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/6fbVBnvDgS9UelUEsu6K9.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a5c0e82823ba72ed2cee7d/9RKY1qgr5pcJGQMWz_-QF.png) [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
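The card specifies the ChatML prompt format but no loading code, so here is a minimal, hedged sketch of building that prompt by hand and generating with `transformers`. The model id and prompt text come from the card; the sampling settings are assumptions, not values from the training run.

```python
# Illustrative sketch: ChatML prompt (as specified in the card) + generation.
# Model id and prompt come from the card; generation settings are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "G-reen/EXPERIMENT-ORPO-m7b2-2-merged"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```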
{"license": "apache-2.0"}
G-reen/EXPERIMENT-ORPO-m7b2-2-merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T11:00:59+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
*This model was trained as part of a series of experiments testing the performance of pure DPO vs SFT vs ORPO, all supported by Unsloth/Huggingface TRL.* Benchmarks Average 59.54 ARC 59.64 HellaSwag 82.44 MMLU 62.25 TruthfulQA 40.09 Winogrande 78.37 GSM8K 34.42 Training Details Duration: ~9 hours on one Kaggle T4 with Unsloth Model: URL Dataset: URL Rank: 8 Alpha: 16 Learning rate: 5e-6 Beta: 0.1 Batch size: 8 Epochs: 1 Learning rate scheduler: Linear Prompt Format: ChatML WanDB Reports !image/png !image/png !image/png <img src="URL width="200"/>
[]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: mrbesher/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget"]}
mrbesher/ppo-SnowballTarget
null
[ "ml-agents", "tensorboard", "onnx", "SnowballTarget", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-SnowballTarget", "region:us" ]
null
2024-04-17T11:01:26+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us
# ppo Agent playing SnowballTarget This is a trained model of a ppo agent playing SnowballTarget using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: mrbesher/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: mrbesher/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #SnowballTarget #deep-reinforcement-learning #reinforcement-learning #ML-Agents-SnowballTarget #region-us \n", "# ppo Agent playing SnowballTarget\n This is a trained model of a ppo agent playing SnowballTarget\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: mrbesher/ppo-SnowballTarget\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
omezzinemariem/mistral-text-to-RULE_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:03:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="shahidaakhtar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
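The snippet above relies on a `load_from_hub` helper that the card does not define; as a hedged, self-contained alternative, the pickle can be fetched with `huggingface_hub` directly. The repo id, filename, and the `env_id` key come from the card; the rest is an assumption about the pickle's contents.

```python
# Hedged alternative to the undefined load_from_hub helper above.
import pickle
import gym  # or gymnasium, depending on your setup
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="shahidaakhtar/q-FrozenLake-v1-4x4-noSlippery",
    filename="q-learning.pkl",
)
with open(path, "rb") as f:
    model = pickle.load(f)  # assumed to be a dict, as the original snippet implies

print(sorted(model.keys()))      # inspect what the pickle actually contains
env = gym.make(model["env_id"])  # the card notes kwargs like is_slippery=False may be needed
```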
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
shahidaakhtar/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-17T11:03:47+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1 This is a trained model of a Q-Learning agent playing FrozenLake-v1 . ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
null
null
# M7Multi_verse_model-7B M7Multi_verse_model-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [MTSAIR/multi_verse_model](https://huggingface.co/MTSAIR/multi_verse_model) ## 🧩 Configuration ```yaml models: - model: liminerity/M7-7b # No parameters necessary for base model - model: MTSAIR/multi_verse_model parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: liminerity/M7-7b parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/M7Multi_verse_model-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["MTSAIR/multi_verse_model"]}
automerger/M7Multi_verse_model-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "base_model:MTSAIR/multi_verse_model", "license:apache-2.0", "region:us" ]
null
2024-04-17T11:04:23+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #region-us
# M7Multi_verse_model-7B M7Multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration. * MTSAIR/multi_verse_model ## Configuration ## Usage
[ "# M7Multi_verse_model-7B\n\nM7Multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #base_model-MTSAIR/multi_verse_model #license-apache-2.0 #region-us \n", "# M7Multi_verse_model-7B\n\nM7Multi_verse_model-7B is an automated merge created by Maxime Labonne using the following configuration.\n* MTSAIR/multi_verse_model", "## Configuration", "## Usage" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="shahidaakhtar/shsh", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "shsh", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.67", "name": "mean_reward", "verified": false}]}]}]}
shahidaakhtar/shsh
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-17T11:05:09+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3 This is a trained model of a Q-Learning agent playing Taxi-v3 . ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
text-classification
bertopic
# impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_14_prob This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. ## Usage To use this model, please install BERTopic: ``` pip install -U bertopic ``` You can use the model as follows: ```python from bertopic import BERTopic topic_model = BERTopic.load("RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_14_prob") topic_model.get_topic_info() ``` ## Topic overview * Number of topics: 830 * Number of training documents: 91393 <details> <summary>Click here for an overview of all topics.</summary> | Topic ID | Topic Keywords | Topic Frequency | Label | |----------|----------------|-----------------|-------| | -1 | denen - freie - teil - website - stellt | 20 | -1_denen_freie_teil_website | | 0 | heute zornig quasi - gut geht - entwürdigend heute - nimm endlich raus - hinten tim | 48605 | 0_heute zornig quasi_gut geht_entwürdigend heute_nimm endlich raus | | 1 | migrationsforscher gerald knaus - migrationskrise - un organisation migration - migrationsforscher - migrationsforscher gerald | 1114 | 1_migrationsforscher gerald knaus_migrationskrise_un organisation migration_migrationsforscher | | 2 | verheerenden erdbeben türkei - schweren erdbeben türkisch - erdbeben türkei syrien - türkei erdbeben - türkei syrien | 699 | 2_verheerenden erdbeben türkei_schweren erdbeben türkisch_erdbeben türkei syrien_türkei erdbeben | | 3 | warum regt deutschland - glauben deutschland - deutschlands allein stark - regt deutschland keinerlei - deutschlands enthüllt | 659 | 3_warum regt deutschland_glauben deutschland_deutschlands allein stark_regt deutschland keinerlei | | 4 | deutsch german politik - german politik spezial - german tägliche politische - deutsch german tägliche - german politik | 484 | 4_deutsch german politik_german politik spezial_german tägliche politische_deutsch german tägliche | | 5 | us biolabore ukraine - biolabs ukraine - ukrainischen gesundheitssystems - biolaboren ukraine - biolabors ukraine | 444 | 5_us biolabore ukraine_biolabs ukraine_ukrainischen gesundheitssystems_biolaboren ukraine | | 6 | weihnachtsbotschaft - weihnachtsruhe - weihnachtszeit - weihnachtsstimmung - weihnachtsmarkt | 421 | 6_weihnachtsbotschaft_weihnachtsruhe_weihnachtszeit_weihnachtsstimmung | | 7 | beim youtube vice - beim youtube - video plattform genommen - directedevolution video - video teilen | 398 | 7_beim youtube vice_beim youtube_video plattform genommen_directedevolution video | | 8 | embargo russisches erdöl - russischer gaslieferungen - gaslieferungen russland - russischem gas - russisches gas | 374 | 8_embargo russisches erdöl_russischer gaslieferungen_gaslieferungen russland_russischem gas | | 9 | publizisten freien medien - scheinen grundsätze journalistischen - grundsätze journalistischen sorgfaltspflicht - grundsätze journalistischen - nachricht meisten publizisten | 368 | 9_publizisten freien medien_scheinen grundsätze journalistischen_grundsätze journalistischen sorgfaltspflicht_grundsätze journalistischen | | 10 | warum gibt krieg - gibt krieg lernen - krieg lernen menschen - krieg lernen - krieg führen krieg | 349 | 10_warum gibt krieg_gibt krieg lernen_krieg lernen menschen_krieg lernen | | 11 | ukrainekrieg frontverlauf - ukrainischen streitkräfte - ukrainische armee - russischen truppen - russische truppen | 340 | 11_ukrainekrieg 
frontverlauf_ukrainischen streitkräfte_ukrainische armee_russischen truppen | | 12 | donald trump lügenpresse - trump lügenpresse - trump erwähnt - trumps - präsident trump | 339 | 12_donald trump lügenpresse_trump lügenpresse_trump erwähnt_trumps | | 13 | ablehnenden stellungnahmen blick - parlamentariern zahl ablehnenden - zahl ablehnenden stellungnahmen - eindruck parlamentariern - gedacht legitimiert sagt | 332 | 13_ablehnenden stellungnahmen blick_parlamentariern zahl ablehnenden_zahl ablehnenden stellungnahmen_eindruck parlamentariern | | 14 | anschlägen nord stream - journalist seymour hersh - journalisten seymour hersh - pipeline ausschaltete - pipelines nord | 326 | 14_anschlägen nord stream_journalist seymour hersh_journalisten seymour hersh_pipeline ausschaltete | | 15 | böller demonstranten israelfahne - wegen israelfahne araber - wegen israelfahne - angriff iran - warum israel | 322 | 15_böller demonstranten israelfahne_wegen israelfahne araber_wegen israelfahne_angriff iran | | 16 | österreichische regierung gibt - österreicher zeiten eigentlich - menschen österreich - barbarei österreichische regierung - österreicher zeiten | 262 | 16_österreichische regierung gibt_österreicher zeiten eigentlich_menschen österreich_barbarei österreichische regierung | | 17 | 18 uhr rathausplatz - uhr rathausplatz - 18 uhr marienplatz - uhr marienplatz - uhr rathausplatz bad | 261 | 17_18 uhr rathausplatz_uhr rathausplatz_18 uhr marienplatz_uhr marienplatz | | 18 | zellerzeitung de - que el - schweden sz - esta - schweiz sage | 255 | 18_zellerzeitung de_que el_schweden sz_esta | | 19 | findet telegram gruppe - welt politische geoengineering - tägliche politische geoengineering - politische geoengineering - twitter causa | 254 | 19_findet telegram gruppe_welt politische geoengineering_tägliche politische geoengineering_politische geoengineering | | 20 | menschen protestieren - proteste corona maßnahmen - protesten - proteste corona politik - demonstrationen corona maßnahmen | 236 | 20_menschen protestieren_proteste corona maßnahmen_protesten_proteste corona politik | | 21 | präsident putin - путин - putin gerade - vladimir putin - warum putin | 229 | 21_präsident putin_путин_putin gerade_vladimir putin | | 22 | israelisches gesundheitsministerium masken - gesundheitsministerium masken erzieherischen - tragen masken erzieherische - gesundheitsministerium masken - studie tragen masken | 229 | 22_israelisches gesundheitsministerium masken_gesundheitsministerium masken erzieherischen_tragen masken erzieherische_gesundheitsministerium masken | | 23 | geheime bunker unterirdische - geheime bunker - bunker unterirdische städte - verheimlichen tiefergehende - analyse unterirdischen | 226 | 23_geheime bunker unterirdische_geheime bunker_bunker unterirdische städte_verheimlichen tiefergehende | | 24 | mittelstand inflation steigende - inflation steigende zinsen - inflation hoch seit - hohe inflation - inflation hoch | 223 | 24_mittelstand inflation steigende_inflation steigende zinsen_inflation hoch seit_hohe inflation | | 25 | politikern altparteien bedrohung - setzt sündenbockpolitik - politischen laufhaus namens - politischen laufhaus - gesichtern personen politischen | 222 | 25_politikern altparteien bedrohung_setzt sündenbockpolitik_politischen laufhaus namens_politischen laufhaus | | 26 | risikopotentials gegenüber impfungen - impfungen gemeldeten nebenwirkungen - impfnebenwirkungen nebenwirkungen forschern - gegenüber impfungen gemeldeten - verdachtsfälle impfnebenwirkungen nebenwirkungen | 221 | 
26_risikopotentials gegenüber impfungen_impfungen gemeldeten nebenwirkungen_impfnebenwirkungen nebenwirkungen forschern_gegenüber impfungen gemeldeten | | 27 | abonnieren bitte - abonnieren bitte telegram - gerne telegram facebook - seite facebook gruppe - facebook gruppe telegram | 207 | 27_abonnieren bitte_abonnieren bitte telegram_gerne telegram facebook_seite facebook gruppe | | 28 | bregenz heute strasse - pöchlarn heute - folgt pöchlarn heute - pöchlarn heute strasse - gestern strasse | 190 | 28_bregenz heute strasse_pöchlarn heute_folgt pöchlarn heute_pöchlarn heute strasse | | 29 | wahrheiten lauterbachs ex - irre lauterbach lügt - lauterbach beim lügen - lauterbach lügt trotz - lauterbach lässt | 186 | 29_wahrheiten lauterbachs ex_irre lauterbach lügt_lauterbach beim lügen_lauterbach lügt trotz | | 30 | videos across censorship - truth news networks - across censorship resistant - censorship resistant internet - censorship resistant | 181 | 30_videos across censorship_truth news networks_across censorship resistant_censorship resistant internet | | 31 | germany - report english tägliche - greetings germany - exposed english tägliche - personal greetings germany | 175 | 31_germany_report english tägliche_greetings germany_exposed english tägliche | | 32 | krieginderukraine ukrainischerkrieg ukrainischewahrheit - ukrainischerkrieg ukrainischewahrheit - ukrainekrieg zeigt richtung - gelogen ukrainekrieg zeigt - gelogen ukrainekrieg | 174 | 32_krieginderukraine ukrainischerkrieg ukrainischewahrheit_ukrainischerkrieg ukrainischewahrheit_ukrainekrieg zeigt richtung_gelogen ukrainekrieg zeigt | | 33 | berlin wahl dreht - wiederholungswahl abgeordnetenhauses berlin - berliner wahl - berlin wahl - wahl berlin | 170 | 33_berlin wahl dreht_wiederholungswahl abgeordnetenhauses berlin_berliner wahl_berlin wahl | | 34 | kraftstoffpreise zapfsäulen setzen - kraftstoffpreise zapfsäulen - erklärung kraftstoffpreise zapfsäulen - erklärung kraftstoffpreise - kraftstoffpreise | 166 | 34_kraftstoffpreise zapfsäulen setzen_kraftstoffpreise zapfsäulen_erklärung kraftstoffpreise zapfsäulen_erklärung kraftstoffpreise | | 35 | sollen corona maßnahmen - corona maßnahmen beschlossen - maßnahmen beschlossen spätestens - sollen maßnahmen gelockert - corona verordnung | 165 | 35_sollen corona maßnahmen_corona maßnahmen beschlossen_maßnahmen beschlossen spätestens_sollen maßnahmen gelockert | | 36 | kraft bitterstoffe hilft - bitterstoffe hilft stoffwechsel - wichtigen vitamin energielieferanten - entschlacken abnehmen winter - abnehmen fällt leichter | 165 | 36_kraft bitterstoffe hilft_bitterstoffe hilft stoffwechsel_wichtigen vitamin energielieferanten_entschlacken abnehmen winter | | 37 | bereich herstellung lebensmitteln - versorgung lebensmitteln - sachen lebensmittel versorgung - düngemittelproduktion - ernähren | 158 | 37_bereich herstellung lebensmitteln_versorgung lebensmitteln_sachen lebensmittel versorgung_düngemittelproduktion | | 38 | china russland strategische - verhältnis russland china - sturm china russland - russland china - russland china seite | 147 | 38_china russland strategische_verhältnis russland china_sturm china russland_russland china | | 39 | arena pharmaceuticals kaufen - us pharmakonzern pfizer - pharmakonzern pfizer - arena pharmaceuticals - pfizer milliarden dollar | 143 | 39_arena pharmaceuticals kaufen_us pharmakonzern pfizer_pharmakonzern pfizer_arena pharmaceuticals | | 40 | ignazbearth 03 potsdam - ignazbearth 16 03 - ignazbearth 03 - ignazbearth 18 03 - paypal ignazbearth 
03 | 142 | 40_ignazbearth 03 potsdam_ignazbearth 16 03_ignazbearth 03_ignazbearth 18 03 | | 41 | trinkwasserverunreinigung - wasserversorgung - bakterien trinkwasser betroffen - wasser qualitätsgemeinschaft - trinkwasserversorgung | 142 | 41_trinkwasserverunreinigung_wasserversorgung_bakterien trinkwasser betroffen_wasser qualitätsgemeinschaft | | 42 | wurde russland dämonisiert - schuld amerikaner russe - westfalen anti russische - anti russische propaganda - russland verbündet | 142 | 42_wurde russland dämonisiert_schuld amerikaner russe_westfalen anti russische_anti russische propaganda | | 43 | corona experten staatsgeheimnis - corona experten - schulz corona experten - corona generäle weiteres - einführung corona generäle | 141 | 43_corona experten staatsgeheimnis_corona experten_schulz corona experten_corona generäle weiteres | | 44 | positiv coronavirus - omikron variante coronavirus - coronavirus - mehr gefährlicher grippe - positiv coronavirus getestet | 141 | 44_positiv coronavirus_omikron variante coronavirus_coronavirus_mehr gefährlicher grippe | | 45 | anti russland sanktionen - sanktionen russland - russland sanktionen - sanktionen treffen russland - möglichkeiten finanzsanktionen russland | 137 | 45_anti russland sanktionen_sanktionen russland_russland sanktionen_sanktionen treffen russland | | 46 | schönes foto - wunderschön - schöne naturfotos - liebe lichtgrüße - herzliche grüße | 133 | 46_schönes foto_wunderschön_schöne naturfotos_liebe lichtgrüße | | 47 | ausgelastet sagte russische - sagte russische vize - umstrittene gaspipeline - russische vize regierungschef - gasleitung | 133 | 47_ausgelastet sagte russische_sagte russische vize_umstrittene gaspipeline_russische vize regierungschef | | 48 | kanal twitter stellt - apolut kanal twitter - twitterfiles reihe verpassen - kritische twitter accs - twitter twittertrends | 132 | 48_kanal twitter stellt_apolut kanal twitter_twitterfiles reihe verpassen_kritische twitter accs | | 49 | insekten lebensmittel us - wahrheit insekten lebensmittel - schockierende wahrheit insekten - beimischung insekten lebensmitteln - insekten lebensmitteln | 128 | 49_insekten lebensmittel us_wahrheit insekten lebensmittel_schockierende wahrheit insekten_beimischung insekten lebensmitteln | | 50 | impfung vorerkrankten kindern - covid impfung erhalten - ständigen impfkommission derzeit - impfkommission derzeit gegeben - impfkommission derzeit | 125 | 50_impfung vorerkrankten kindern_covid impfung erhalten_ständigen impfkommission derzeit_impfkommission derzeit gegeben | | 51 | zerstörten deutschland ende - boden zerstörten deutschland - zerstörten deutschland - ukraine zerstückelung ruin - krieges ukraine zerstückelung | 125 | 51_zerstörten deutschland ende_boden zerstörten deutschland_zerstörten deutschland_ukraine zerstückelung ruin | | 52 | bundesamt justiz telegram - angehen betreiber telegram - telegram vorgehen - sagt telegram kampf - betreiber telegram | 121 | 52_bundesamt justiz telegram_angehen betreiber telegram_telegram vorgehen_sagt telegram kampf | | 53 | mrna impfstoffe zellkern - impfstoff menschlichen leberzellen - impfstoffe zellkern - wonach impfstoff mrna - impfstoffe zellkern eindringen | 120 | 53_mrna impfstoffe zellkern_impfstoff menschlichen leberzellen_impfstoffe zellkern_wonach impfstoff mrna | | 54 | chinesischen ballon - chinesischen ballons - mutmaßlichen chinesischen spionageballon - luftballons - chinese balloon | 119 | 54_chinesischen ballon_chinesischen ballons_mutmaßlichen chinesischen spionageballon_luftballons | 
| 55 | exporte russland - ukraine kornkammer europas - exporte russland ukraine - grund rekordhohen erdgaspreise - ukraine kornkammer | 118 | 55_exporte russland_ukraine kornkammer europas_exporte russland ukraine_grund rekordhohen erdgaspreise | | 56 | gesellschaft leidet feigheit - warum gesellschaft tief - mainstream medien fasst - medien fasst fuellmich - einflussreichen medienkonzernen öffentlich | 118 | 56_gesellschaft leidet feigheit_warum gesellschaft tief_mainstream medien fasst_medien fasst fuellmich | | 57 | aufruf friedlichen demonstranten - friedlichen demonstranten trefft - demonstranten trefft - aufruf friedlichen - demonstranten trefft 11 | 113 | 57_aufruf friedlichen demonstranten_friedlichen demonstranten trefft_demonstranten trefft_aufruf friedlichen | | 58 | ungeimpfte klimakrisen verleugner - ungeimpfte klimakrisen - klimanotstand auszurufen - menschengemachten klimawandel - klimapolitik | 111 | 58_ungeimpfte klimakrisen verleugner_ungeimpfte klimakrisen_klimanotstand auszurufen_menschengemachten klimawandel | | 59 | com geraldgrosz youtube - webshop gerald - twitter com geraldgrosz - instagram com geraldgrosz - geraldgrosz youtube | 108 | 59_com geraldgrosz youtube_webshop gerald_twitter com geraldgrosz_instagram com geraldgrosz | | 60 | polizei geht zumindest - polizei handelt demnach - polizei geht - polizei handelt - chemnitzer polizei geht | 108 | 60_polizei geht zumindest_polizei handelt demnach_polizei geht_polizei handelt | | 61 | krieg ukraine usa - krieg verhindern rät - krieg verhindert wurde - krieg verhindern - krieg rückt nato | 107 | 61_krieg ukraine usa_krieg verhindern rät_krieg verhindert wurde_krieg verhindern | | 62 | impflicht unterstützen mediziner - evidenzbasierten medizin - kritische mediziner ärztekammer - kritische mediziner - mediziner innen wissenschafter | 106 | 62_impflicht unterstützen mediziner_evidenzbasierten medizin_kritische mediziner ärztekammer_kritische mediziner | | 63 | video devastated germany - devastated germany - end germany - devastated germany end - germany end | 105 | 63_video devastated germany_devastated germany_end germany_devastated germany end | | 64 | elektroautos deutschlands - folgen steuereinnahmen diesel - elektromobilität - kauf elektroautos - 000 euro auto | 104 | 64_elektroautos deutschlands_folgen steuereinnahmen diesel_elektromobilität_kauf elektroautos | | 65 | leopard panzer ukraine - leopard panzern ukraine - kampfpanzer ukraine - lieferung leopard panzern - panzern ukraine ab | 104 | 65_leopard panzer ukraine_leopard panzern ukraine_kampfpanzer ukraine_lieferung leopard panzern | | 66 | ufo sichtungen - ufo follow us - wären außerirdische - ufo follow - außerirdischen | 101 | 66_ufo sichtungen_ufo follow us_wären außerirdische_ufo follow | | 67 | zeigt epstein maxwell - epstein maxwell - anwälte ghislaine maxwell - maxwell dafür gekämpft - sagte maxwell | 100 | 67_zeigt epstein maxwell_epstein maxwell_anwälte ghislaine maxwell_maxwell dafür gekämpft | | 68 | anfänger danke q74you - krieg nesara gesara - q74you - plan verstehen kommst - danke q74you | 98 | 68_anfänger danke q74you_krieg nesara gesara_q74you_plan verstehen kommst | | 69 | ja 18 00 - 17 02 uhr - januar 22 ernst - ab 18 00 - februar 00 03 | 98 | 69_ja 18 00_17 02 uhr_januar 22 ernst_ab 18 00 | | 70 | spritpreise zeiten heizölkosten - gaspreise - gas ölpreise - tankstellen steigen preise - rohölpreise | 95 | 70_spritpreise zeiten heizölkosten_gaspreise_gas ölpreise_tankstellen steigen preise | | 71 | ukrainische präsident wolodymyr - 
brutale medienstrategie ukraine - zwingen ukraine helfen - präsident wolodymyr selenskyj - endemische korruption ukraine | 95 | 71_ukrainische präsident wolodymyr_brutale medienstrategie ukraine_zwingen ukraine helfen_präsident wolodymyr selenskyj | | 72 | protestzug impfpflicht - protest allgemeine impfpflicht - berlin protestzug impfpflicht - protesten impfpflicht - protest impfpflicht | 94 | 72_protestzug impfpflicht_protest allgemeine impfpflicht_berlin protestzug impfpflicht_protesten impfpflicht | | 73 | zeugen coronas religiöse - ersatzreligion erhalten - ersatzreligion - ersatzreligion erhalten bleibt - coronas religiöse | 94 | 73_zeugen coronas religiöse_ersatzreligion erhalten_ersatzreligion_ersatzreligion erhalten bleibt | | 74 | ungarische polizei - linksterroristen - handelt mutmaßlichen angreifern - schläger trupps budapest - polizei sucht deutsche | 93 | 74_ungarische polizei_linksterroristen_handelt mutmaßlichen angreifern_schläger trupps budapest | | 75 | russland gas liefert - hoffe russland gas - stärker deutschland dient - je stärker deutschland - deutschland dient umso | 93 | 75_russland gas liefert_hoffe russland gas_stärker deutschland dient_je stärker deutschland | | 76 | young 17 pfizer - mike adams 19 - 20 dr andrew - andrew kaufman 20 - dr andrew kaufman | 92 | 76_young 17 pfizer_mike adams 19_20 dr andrew_andrew kaufman 20 | | 77 | waldgrenze groß warnstufe - lagen erheblich warnstufe - oberhalb waldgrenze - erheblich warnstufe - groß warnstufe | 91 | 77_waldgrenze groß warnstufe_lagen erheblich warnstufe_oberhalb waldgrenze_erheblich warnstufe | | 78 | frankfurter flughafen - flughafen frankfurt - frankfurt - gange situation bahnhof - flughafen | 91 | 78_frankfurter flughafen_flughafen frankfurt_frankfurt_gange situation bahnhof | | 79 | qualifizierte rechtskräftigen freispruch - rechtskräftigen freispruch - amtsanmaßung angeklagte univ - betruges amtsanmaßung angeklagte - rechtskräftigen freispruch punkten | 90 | 79_qualifizierte rechtskräftigen freispruch_rechtskräftigen freispruch_amtsanmaßung angeklagte univ_betruges amtsanmaßung angeklagte | | 80 | stromnetz hamburg - beleuchtung landebahnen - kabelfehler - trafostationen - betroffen erzgebirge morgen | 90 | 80_stromnetz hamburg_beleuchtung landebahnen_kabelfehler_trafostationen | | 81 | wirtschaftskrisen - co beispiellosen teuerungswelle - menschen neue armut - öko kommunismus heizen - neue armut reichtum | 87 | 81_wirtschaftskrisen_co beispiellosen teuerungswelle_menschen neue armut_öko kommunismus heizen | | 82 | 99 prozent elektrogeräte - stundedie ladegeschwindigkeit river - ladegeschwindigkeit river - ladegeschwindigkeit river übertrifft - stundedie ladegeschwindigkeit | 87 | 82_99 prozent elektrogeräte_stundedie ladegeschwindigkeit river_ladegeschwindigkeit river_ladegeschwindigkeit river übertrifft | | 83 | grünen staatsverweigerer demokratieverweigerer - grünen politiker - extremisten sollen grüne - grünen umfragen entlarven - sollen grüne parteizentrale | 85 | 83_grünen staatsverweigerer demokratieverweigerer_grünen politiker_extremisten sollen grüne_grünen umfragen entlarven | | 84 | greetings germany - personal greetings germany - greetings germany go - patriots world ukraine - germany go patriots | 83 | 84_greetings germany_personal greetings germany_greetings germany go_patriots world ukraine | | 85 | internationalen gesundheitsvorschriften - abkommen ermöglichen ländern - internationalen pandemievertrag - globalen pandemievertrag - ländern beliebige maßnahmen | 82 | 85_internationalen 
gesundheitsvorschriften_abkommen ermöglichen ländern_internationalen pandemievertrag_globalen pandemievertrag | | 86 | fermentierung löwenzahn extrakt - zusätzlich verstärkt fermentierungsprozess - zugabe milchsäurebakterien anschließende - verstärkt fermentierungsprozess macht - milchsäurebakterien anschließende fermentierung | 81 | 86_fermentierung löwenzahn extrakt_zusätzlich verstärkt fermentierungsprozess_zugabe milchsäurebakterien anschließende_verstärkt fermentierungsprozess macht | | 87 | schlafen helfen natürliche - kombination kennen schlaf - schlaf erholung bekommen - ausreichend schlaf erholung - schlaf erholung | 81 | 87_schlafen helfen natürliche_kombination kennen schlaf_schlaf erholung bekommen_ausreichend schlaf erholung | | 88 | buch ethik - buch kult viralität - buch ethik impfens - bestellen buch kult - buch kult | 81 | 88_buch ethik_buch kult viralität_buch ethik impfens_bestellen buch kult | | 89 | kryptowährungen zukunft finanzsystems - sparkassen planen bitcoin - handel digitalwährungen bitcoin - digitalwährungen bitcoin - digitalwährungen bitcoin ethereum | 80 | 89_kryptowährungen zukunft finanzsystems_sparkassen planen bitcoin_handel digitalwährungen bitcoin_digitalwährungen bitcoin | | 90 | maske tragen kinder - maskengeboten kinder jugendliche - kindern jährige maske - maskengeboten kinder - masken schulen | 80 | 90_maske tragen kinder_maskengeboten kinder jugendliche_kindern jährige maske_maskengeboten kinder | | 91 | mobilfunkkritiker dr - mobilfunkkritiker - erhöhtes krebsrisiko - 2004 hinsichtlich krebserkrankungen - hinsichtlich krebserkrankungen | 80 | 91_mobilfunkkritiker dr_mobilfunkkritiker_erhöhtes krebsrisiko_2004 hinsichtlich krebserkrankungen | | 92 | folgen weiterlesen - anmerkungen aussagen - folgen weiterlesen teilen - teilen folgen weiterlesen - bestimmt mehr querdenken | 79 | 92_folgen weiterlesen_anmerkungen aussagen_folgen weiterlesen teilen_teilen folgen weiterlesen | | 93 | falscher alarm ukraine - tschernobyl saporischschja erobert - gefahr darzustellen tschernobyl - kernkraftwerk tschernobyl - atomkraftwerk tschernobyl | 79 | 93_falscher alarm ukraine_tschernobyl saporischschja erobert_gefahr darzustellen tschernobyl_kernkraftwerk tschernobyl | | 94 | menschliche erfahrungen zeigen - geistigen fortschritt bietet - typisch menschliche erfahrungen - erfahrungen zeigen stärken - bietet leser möglichkeit | 78 | 94_menschliche erfahrungen zeigen_geistigen fortschritt bietet_typisch menschliche erfahrungen_erfahrungen zeigen stärken | | 95 | ermitteln sicherheitsbericht bringen - untererfassung bkk versicherten - verdachtsfälle nebenwirkung - bkk versicherten errechnen - versicherten errechnen hochgerechnet | 78 | 95_ermitteln sicherheitsbericht bringen_untererfassung bkk versicherten_verdachtsfälle nebenwirkung_bkk versicherten errechnen | | 96 | fallschirmjäger survival experten - sämtliche hilfsmittel urlaubs - hilfsmittel urlaubs outdoor - aktivitäten art hervorragend - ehemaligen fallschirmjäger | 78 | 96_fallschirmjäger survival experten_sämtliche hilfsmittel urlaubs_hilfsmittel urlaubs outdoor_aktivitäten art hervorragend | | 97 | gender ideologie - wörter männlich weiblich - feminismus - männlich weiblich - ja implizieren männer | 78 | 97_gender ideologie_wörter männlich weiblich_feminismus_männlich weiblich | | 98 | krieg deutsche wirtschaft - deutsche exportüberschuss - folgen kriegs ukraine - deutsche exportüberschuss jahr - welthäfen ukraine russland | 77 | 98_krieg deutsche wirtschaft_deutsche exportüberschuss_folgen kriegs 
ukraine_deutsche exportüberschuss jahr | | 99 | ziele krieges ukraine - krieges ukraine zerstückelung - krieges ukraine - ukraine zerstückelung ruin - ukraine zerstückelung | 77 | 99_ziele krieges ukraine_krieges ukraine zerstückelung_krieges ukraine_ukraine zerstückelung ruin | | 100 | spionageein schutz - spionage sollten handelsübliches - ortung schutz privatsphäre - spionage sollten - verhindert handyhülle schützt | 76 | 100_spionageein schutz_spionage sollten handelsübliches_ortung schutz privatsphäre_spionage sollten | | 101 | österreich menschen freiheit - österreich stehen demokratie - freien meinung österreich - grundrechte wien niederösterreich - politische neubeginn österreich | 76 | 101_österreich menschen freiheit_österreich stehen demokratie_freien meinung österreich_grundrechte wien niederösterreich | | 102 | zahlungsausfall russlands - zahlungsausfall russlands große - finanzinstitutionen russland - finanzinstitutionen russland aktiv - russischen zentralbank | 76 | 102_zahlungsausfall russlands_zahlungsausfall russlands große_finanzinstitutionen russland_finanzinstitutionen russland aktiv | | 103 | oberverwaltungsgericht lüneburg 2g - westfalen klage 2g - klage 2g regelung - 2g regel einzelhandel - 2g regelung einzelhandel | 76 | 103_oberverwaltungsgericht lüneburg 2g_westfalen klage 2g_klage 2g regelung_2g regel einzelhandel | | 104 | current volcanic eruption - aktuellen vulkanausbruch insel - vulkanausbruch insel la - aktuellen vulkanausbruch - current volcanic | 76 | 104_current volcanic eruption_aktuellen vulkanausbruch insel_vulkanausbruch insel la_aktuellen vulkanausbruch | | 105 | wegen angeblichen leaks - angeblichen leaks zuschauern - neuesten skandal geht - verdacht befangenheit gerichts - angeblichen leaks | 76 | 105_wegen angeblichen leaks_angeblichen leaks zuschauern_neuesten skandal geht_verdacht befangenheit gerichts | | 106 | staates generelle impfpflicht - generelle impfpflicht einzuführen - impfpflicht einzuführen angesichts - staat verordnete impfpflicht - verordnete impfpflicht verantworten | 75 | 106_staates generelle impfpflicht_generelle impfpflicht einzuführen_impfpflicht einzuführen angesichts_staat verordnete impfpflicht | | 107 | führten massiven lerndefiziten - schulschließungen führten massiven - massiven lerndefiziten - massiven lerndefiziten meta - schulschließungen führten | 75 | 107_führten massiven lerndefiziten_schulschließungen führten massiven_massiven lerndefiziten_massiven lerndefiziten meta | | 108 | klimapolitik energiewende atomkraft - energiewende atomkraft warnen - mehrwert atomkraft klimaneutralität - atomkraft klimaneutralität - gestritten erdgas atomkraft | 75 | 108_klimapolitik energiewende atomkraft_energiewende atomkraft warnen_mehrwert atomkraft klimaneutralität_atomkraft klimaneutralität | | 109 | zensur zugreifen safari - brasilien februar wwg1wga - zensur zugreifen - regierungsgebäude brasilien februar - danke q74you apple | 75 | 109_zensur zugreifen safari_brasilien februar wwg1wga_zensur zugreifen_regierungsgebäude brasilien februar | | 110 | sieg norweger - viertelfinal - topspiel bundesliga empfängt - topspiel bundesliga - triumphierte | 74 | 110_sieg norweger_viertelfinal_topspiel bundesliga empfängt_topspiel bundesliga | | 111 | bombardierung dresdens - bombenangriff dresden stellt - gedenken opfer bombenterrors - 1945 gedenken opfer - 1945 gedenken | 73 | 111_bombardierung dresdens_bombenangriff dresden stellt_gedenken opfer bombenterrors_1945 gedenken opfer | | 112 | 19 infektion gesunden - infektion gesunden 
kindern - todesfällen covid 19 - covid impfstoffen vermerkt - 000 todesfälle zeitlicher | 73 | 112_19 infektion gesunden_infektion gesunden kindern_todesfällen covid 19_covid impfstoffen vermerkt | | 113 | impfstoffe produktlisten biotech - impfstoffe produktlisten - mrna impfstoffe produktlisten - covid mrna impfprogramme - bestandteile mrna impfstoffe | 73 | 113_impfstoffe produktlisten biotech_impfstoffe produktlisten_mrna impfstoffe produktlisten_covid mrna impfprogramme | | 114 | veranstaltung deutschsprachigen gemeinschaft - veranstaltung deutschsprachigen - märz veranstaltung deutschsprachigen - eingeladen märz veranstaltung - deutschsprachigen gemeinschaft balaton | 73 | 114_veranstaltung deutschsprachigen gemeinschaft_veranstaltung deutschsprachigen_märz veranstaltung deutschsprachigen_eingeladen märz veranstaltung | | 115 | abonnieren freie medien - unabhängige medienarbeit unterstützen - medien freiwilligen zuwendung - freie medien kanal - medien freiwilligen | 72 | 115_abonnieren freie medien_unabhängige medienarbeit unterstützen_medien freiwilligen zuwendung_freie medien kanal | | 116 | allgemeinen impfpflicht abstimmen - impfpflicht bundesrepublik abstimmen - einführung impfpflicht bundesrepublik - bundestagsabgeordneten allgemeine impfpflicht - impfpflicht abstimmen | 72 | 116_allgemeinen impfpflicht abstimmen_impfpflicht bundesrepublik abstimmen_einführung impfpflicht bundesrepublik_bundestagsabgeordneten allgemeine impfpflicht | | 117 | präsidenten wladimir putin - sagte ukrainischen präsidenten - bennett ukrainischen präsidenten - präsidenten wladimir - angebot russischen präsidenten | 72 | 117_präsidenten wladimir putin_sagte ukrainischen präsidenten_bennett ukrainischen präsidenten_präsidenten wladimir | | 118 | neuerscheinung erfolgreichen bestsellerreihe - bestsellerliste zurück buch - erfolgreichen bestsellerreihe - bestsellerliste zurück - buch abstieg | 71 | 118_neuerscheinung erfolgreichen bestsellerreihe_bestsellerliste zurück buch_erfolgreichen bestsellerreihe_bestsellerliste zurück | | 119 | video devastated germany - devastated germany end - end germany - devastated germany - germany end | 71 | 119_video devastated germany_devastated germany end_end germany_devastated germany | | 120 | china örtlichen behörden - china örtlichen - chinesen - china feuchte nocovid - china 06 03 | 71 | 120_china örtlichen behörden_china örtlichen_chinesen_china feuchte nocovid | | 121 | effiziente stromerzeugung 20 - notstromaggregat stromversorgung - tragbares notstromaggregat stromversorgung - stromversorgung stromausfällen effiziente - effiziente stromerzeugung | 68 | 121_effiziente stromerzeugung 20_notstromaggregat stromversorgung_tragbares notstromaggregat stromversorgung_stromversorgung stromausfällen effiziente | | 122 | abnehmbare hüfttasche diversen - hüfttasche diversen reißverschlusstaschen - reißverschlusstaschen hüfttasche umhängetasche - camping defense pack - hüfttasche umhängetasche genutzt | 67 | 122_abnehmbare hüfttasche diversen_hüfttasche diversen reißverschlusstaschen_reißverschlusstaschen hüfttasche umhängetasche_camping defense pack | | 123 | überall immer frieden - immer frieden - će svijetu donijeti - ljubav će svijetu - vjesnico mira | 67 | 123_überall immer frieden_immer frieden_će svijetu donijeti_ljubav će svijetu | | 124 | get dr jane - episode dr jane - show dr jane - follow dr jane - ask dr jane | 67 | 124_get dr jane_episode dr jane_show dr jane_follow dr jane | | 125 | speziellen lebensmittel bereits - militär speziellen lebensmittel - bestens 
bewährt somit - speziellen lebensmittel - bestens bewährt | 67 | 125_speziellen lebensmittel bereits_militär speziellen lebensmittel_bestens bewährt somit_speziellen lebensmittel | | 126 | millionen satanisten gute - satanisten gute ländern - millionen satanisten - satanische seelenlose leben - satanisten gute | 66 | 126_millionen satanisten gute_satanisten gute ländern_millionen satanisten_satanische seelenlose leben | | 127 | nachweis krankmachendes virus - krankmachendes virus erbringen - virus vorlagen - krankmachendes virus - virus erbringen | 66 | 127_nachweis krankmachendes virus_krankmachendes virus erbringen_virus vorlagen_krankmachendes virus | | 128 | exklusive elektrosmog tester - elektrosmog tester cm - elektrosmog tester - exklusive elektrosmog - neue exklusive elektrosmog | 66 | 128_exklusive elektrosmog tester_elektrosmog tester cm_elektrosmog tester_exklusive elektrosmog | | 129 | 2023 herraber - 12 2021 leibnitz - 12 2021 lübeck - 12 2021 kempten - 18 12 2021 | 66 | 129_2023 herraber_12 2021 leibnitz_12 2021 lübeck_12 2021 kempten | | 130 | allgemeinarzt ehem vorstand - dr john ionescu - evidenzbasierte medizin dnebm - medizin allgemeinarzt ehem - ici vizeparteiobmann | 66 | 130_allgemeinarzt ehem vorstand_dr john ionescu_evidenzbasierte medizin dnebm_medizin allgemeinarzt ehem | | 131 | uhr volksfestplatz nürnberg - 00 uhr stadtplatz - uhr stadtplatz - volksfestplatz nürnberg einigkeit - 30 uhr volksfestplatz | 66 | 131_uhr volksfestplatz nürnberg_00 uhr stadtplatz_uhr stadtplatz_volksfestplatz nürnberg einigkeit | | 132 | unruhe sorgen widerstand - staatsfunk geht demonstrationen - geht demonstrationen - geht demonstrationen bildgebendes - sprechchöre reden niederschwelligen | 66 | 132_unruhe sorgen widerstand_staatsfunk geht demonstrationen_geht demonstrationen_geht demonstrationen bildgebendes | | 133 | apolut app steht - apolut kostenlose app - link apolut app - apolut app - apk app unserer | 65 | 133_apolut app steht_apolut kostenlose app_link apolut app_apolut app | | 134 | steuer deftige - co2 steuer deftige - verschiebung co2 steuer - steuern bundesregierung - quellensteuer | 65 | 134_steuer deftige_co2 steuer deftige_verschiebung co2 steuer_steuern bundesregierung | | 135 | gärtnern entschleunigung bodenbewusstsein - auf1 beschäftigen gärtnern - beschäftigen gärtnern entschleunigung - bodenbewusstsein bioqualität - beschäftigen gärtnern | 65 | 135_gärtnern entschleunigung bodenbewusstsein_auf1 beschäftigen gärtnern_beschäftigen gärtnern entschleunigung_bodenbewusstsein bioqualität | | 136 | sozialismus ungerechtigkeit welt - glauben sozialismus ungerechtigkeit - kommunismus - sozialismus ungerechtigkeit - sozialismus kommunismus faschismus | 65 | 136_sozialismus ungerechtigkeit welt_glauben sozialismus ungerechtigkeit_kommunismus_sozialismus ungerechtigkeit | | 137 | russland ländern bedrohten - jeglicher angriff russlands - warnung russland - verlassen dreht russland - russischen landesinneren bekommt | 64 | 137_russland ländern bedrohten_jeglicher angriff russlands_warnung russland_verlassen dreht russland | | 138 | gold ultimativen währung - inflation gold euro - gold inflation gold - hoch gold inflation - gold inflation | 64 | 138_gold ultimativen währung_inflation gold euro_gold inflation gold_hoch gold inflation | | 139 | report naturalnews videos - naturalnews videos - freedom medical freedom - medical freedom freedom - naturalnews videos would | 64 | 139_report naturalnews videos_naturalnews videos_freedom medical freedom_medical freedom freedom | | 140 
| rothschild dynastie - familiendynastien fast gesamte - familiendynastien fast - mächtigste familie erde - lord rothschild mitgliedern | 64 | 140_rothschild dynastie_familiendynastien fast gesamte_familiendynastien fast_mächtigste familie erde | | 141 | mühe aussagen herr - herr quade wertlosen - frage herr quade - herr quade mal - herr quade ja | 64 | 141_mühe aussagen herr_herr quade wertlosen_frage herr quade_herr quade mal | | 142 | immunität südafrikanische ärztin - immunität südafrikanische - natürliche immunität südafrikanische - zeigte medizinerin überrascht - natürliche immunität | 63 | 142_immunität südafrikanische ärztin_immunität südafrikanische_natürliche immunität südafrikanische_zeigte medizinerin überrascht | | 143 | dutzende tote tornados - tödlichste tornado - tödlichste tornado jemals - mehr 30 tornados - tornados kentucky | 63 | 143_dutzende tote tornados_tödlichste tornado_tödlichste tornado jemals_mehr 30 tornados | | 144 | impfschutz rapide abnimmt - booster impfungen geimpften - verhinderung ausbreitung coronavirus - impfung verhinderung ausbreitung - impfungen geimpften eingeführt | 63 | 144_impfschutz rapide abnimmt_booster impfungen geimpften_verhinderung ausbreitung coronavirus_impfung verhinderung ausbreitung | | 145 | überraschend gestorben - gestorben gab ehemaliger - gestorben - gestorben gab - unerwarteter tod | 63 | 145_überraschend gestorben_gestorben gab ehemaliger_gestorben_gestorben gab | | 146 | crailsheim - 200 teilnehmer - personen woche ca - sangerhausen sachsen anhalt - 45 spaziergänger | 62 | 146_crailsheim_200 teilnehmer_personen woche ca_sangerhausen sachsen anhalt | | 147 | orf beschwerde verharmlosung - vorwurf einseitige berichterstattung - beschwerde verharmlosung gefährlicher - orf beschwerde - schweren nebenwirkungen impfkampagne | 62 | 147_orf beschwerde verharmlosung_vorwurf einseitige berichterstattung_beschwerde verharmlosung gefährlicher_orf beschwerde | | 148 | übersterblichkeit deutschland - momentan sterben deutschland - sonderauswertung vorläufigen sterbefallzahlen - menschen gestorben zahl - gestorben zahl liegt | 62 | 148_übersterblichkeit deutschland_momentan sterben deutschland_sonderauswertung vorläufigen sterbefallzahlen_menschen gestorben zahl | | 149 | afd häutung partei - gewisse politisch neutrale - demokratie afd abgeordnete - anschlag demokratie afd - abgeordnete eigentlich | 62 | 149_afd häutung partei_gewisse politisch neutrale_demokratie afd abgeordnete_anschlag demokratie afd | | 150 | video deutschlandkurier - neues video deutschlandkurier - videobotschaft aufruf berlin - video wien österreich - video kehl | 62 | 150_video deutschlandkurier_neues video deutschlandkurier_videobotschaft aufruf berlin_video wien österreich | | 151 | lesungen ab euro - gunnar kaiser youtube - ab euro monat - kaiser youtube gunnar - kaiser youtube | 61 | 151_lesungen ab euro_gunnar kaiser youtube_ab euro monat_kaiser youtube gunnar | | 152 | bitte teilen telegram - telegram martin rutter - telegram beim newsletter - teilen telegram folgen - folge telegram beim | 61 | 152_bitte teilen telegram_telegram martin rutter_telegram beim newsletter_teilen telegram folgen | | 153 | meiningen gestern - meiningen gestern strasse - com compactmagazin tiktok - meissen heute - isa commerzbank meißen | 61 | 153_meiningen gestern_meiningen gestern strasse_com compactmagazin tiktok_meissen heute | | 154 | mega demo freiheit - absicht österreichischer grundrechtsaktivist - mega demo wien - demo freiheit rede - demo freiheit | 61 | 154_mega demo 
freiheit_absicht österreichischer grundrechtsaktivist_mega demo wien_demo freiheit rede | | 155 | maßnahmen entlastung einkommensschwachen - anpassung erhöhung wohnbeihilfe - erhöhung wohnbeihilfe - erhöhung wohnbeihilfe gemäß - entlastung einkommensschwachen | 61 | 155_maßnahmen entlastung einkommensschwachen_anpassung erhöhung wohnbeihilfe_erhöhung wohnbeihilfe_erhöhung wohnbeihilfe gemäß | | 156 | wochenprogramm kommende woche - auf1 wochenprogramm kommende - kommende woche vielfältiges - woche vielfältiges wochenprogramm - start woche | 61 | 156_wochenprogramm kommende woche_auf1 wochenprogramm kommende_kommende woche vielfältiges_woche vielfältiges wochenprogramm | | 157 | gaspreise europa - gaspreise - gaspreis rekordhoch - nowak europa verbraucht - gas pro jahr | 61 | 157_gaspreise europa_gaspreise_gaspreis rekordhoch_nowak europa verbraucht | | 158 | friedensbewegung - eigenen herzen friedensbewegung - macht frieden versammeln - richtung frieden mehr - macht frieden | 60 | 158_friedensbewegung_eigenen herzen friedensbewegung_macht frieden versammeln_richtung frieden mehr | | 159 | ukrainische nazis säubern - 2014 ukrainische nazis - nazis ukraine handelt - wortlaut nazis ukraine - ukrainische nazis | 60 | 159_ukrainische nazis säubern_2014 ukrainische nazis_nazis ukraine handelt_wortlaut nazis ukraine | | 160 | gut tragen großstadtdschungel - großstadtdschungel erobern sämtliche - tragen großstadtdschungel - großstadtdschungel erobern - tragen großstadtdschungel erobern | 60 | 160_gut tragen großstadtdschungel_großstadtdschungel erobern sämtliche_tragen großstadtdschungel_großstadtdschungel erobern | | 161 | kamin aufsteigen wasser - wasser kochen gebracht - erhitzt folge kamineffektes - befindet wasser kochen - wasser kochen | 59 | 161_kamin aufsteigen wasser_wasser kochen gebracht_erhitzt folge kamineffektes_befindet wasser kochen | | 162 | kimmichs corona erkrankung - corona erkrankung bedroht - ansteckt bzw grippe - bzw grippe bekommt - immunität bräuchte erst | 58 | 162_kimmichs corona erkrankung_corona erkrankung bedroht_ansteckt bzw grippe_bzw grippe bekommt | | 163 | gerade panik narrativ - wovor hast angst - menschen gerade panik - panik narrativ - hirnforschers menschheitsgeschichte angst | 58 | 163_gerade panik narrativ_wovor hast angst_menschen gerade panik_panik narrativ | | 164 | wegen aktuellen zensurwelle - zensurwelle sozialen medien - folgenden zensurfreien - aktuellen zensurwelle - aktuellen zensurwelle sozialen | 58 | 164_wegen aktuellen zensurwelle_zensurwelle sozialen medien_folgenden zensurfreien_aktuellen zensurwelle | | 165 | impfpflicht deutschland - fall impfpflicht einfache - vielleicht fall impfpflicht - impfpflicht einfache verlockende - fall impfpflicht | 57 | 165_impfpflicht deutschland_fall impfpflicht einfache_vielleicht fall impfpflicht_impfpflicht einfache verlockende | | 166 | us luftwaffenstützpunkt ramstein - us luftwaffenstützpunkt - schritt polnische außenministerium - polnische außenministerium - polnische außenministerium dienstagabend | 57 | 166_us luftwaffenstützpunkt ramstein_us luftwaffenstützpunkt_schritt polnische außenministerium_polnische außenministerium | | 167 | rücktritt gesundheitsminister mückstein - heutige rücktritt gesundheitsminister - rücktritt gesundheitsministers gibt - rücktritt gesundheitsminister - mehr rücktritt gesundheitsministers | 57 | 167_rücktritt gesundheitsminister mückstein_heutige rücktritt gesundheitsminister_rücktritt gesundheitsministers gibt_rücktritt gesundheitsminister | | 168 | schwächen 
immunsystem - immunsystem prozesse krankheiten - prozesse krankheiten führen - schnell chronisch schwächen - prozesse krankheiten | 57 | 168_schwächen immunsystem_immunsystem prozesse krankheiten_prozesse krankheiten führen_schnell chronisch schwächen | | 169 | gebucht klimaschützer auseinanderhalten - gebucht klimaschützer - privatleute gebucht klimaschützer - klimaschützer auseinanderhalten - zwei klimaaktivisten | 56 | 169_gebucht klimaschützer auseinanderhalten_gebucht klimaschützer_privatleute gebucht klimaschützer_klimaschützer auseinanderhalten | | 170 | unserer digitalen identität - digitale id - digital id - podatke ima - cetral bank digital | 56 | 170_unserer digitalen identität_digitale id_digital id_podatke ima | | 171 | kündigung wasservertrags tesla - kündigung wasservertrags - wasserförderung - wasservertrags - wasserwerk | 56 | 171_kündigung wasservertrags tesla_kündigung wasservertrags_wasserförderung_wasservertrags | | 172 | ziele krieges ukraine - krieges ukraine zerstückelung - krieges ukraine - ukraine zerstückelung ruin - ukraine zerstückelung | 56 | 172_ziele krieges ukraine_krieges ukraine zerstückelung_krieges ukraine_ukraine zerstückelung ruin | | 173 | kanal abonnieren bitte - gerne kanal abonnieren - ausprobieren kanal abonnieren - abonnieren kanal - abonniert kanal | 56 | 173_kanal abonnieren bitte_gerne kanal abonnieren_ausprobieren kanal abonnieren_abonnieren kanal | | 174 | leben führen mutter - mutter eben leben - liebe vernunft mama - mama danke gezeigt - jahre alt mama | 55 | 174_leben führen mutter_mutter eben leben_liebe vernunft mama_mama danke gezeigt | | 175 | gilt politischen kritik - darf wissenschaftler - darf wissenschaftler vorschläge - geworden statt wissenschaftliche - betracht gezogen wissenschaftliche | 55 | 175_gilt politischen kritik_darf wissenschaftler_darf wissenschaftler vorschläge_geworden statt wissenschaftliche | | 176 | demonstration freitag 11 - demonstration freitag - kundgebung demonstration freitag - lauten aufhebung impfpflicht - demonstration | 55 | 176_demonstration freitag 11_demonstration freitag_kundgebung demonstration freitag_lauten aufhebung impfpflicht | | 177 | rechtsextremismus vorwurf gelassenheit - wirkt rechtsextremismus politischer - rechtsextremismus vorwurf - wirkt rechtsextremismus - rechtsextremismus politischer | 55 | 177_rechtsextremismus vorwurf gelassenheit_wirkt rechtsextremismus politischer_rechtsextremismus vorwurf_wirkt rechtsextremismus | | 178 | medienarbeit paypal bankverbindung - paypal bankverbindung lautet - paypal bankverbindung - unabhängige medienarbeit paypal - medienarbeit paypal | 55 | 178_medienarbeit paypal bankverbindung_paypal bankverbindung lautet_paypal bankverbindung_unabhängige medienarbeit paypal | | 179 | somit fast regierungschef - deutschlands eigene - system deutschlands eigene - befehlsgeber logen rothschilds - weltweit irgendeiner loge | 55 | 179_somit fast regierungschef_deutschlands eigene_system deutschlands eigene_befehlsgeber logen rothschilds | | 180 | beneder freiheit - freiheit freiheit quelle - beschrieben freiheit - freiheit quelle - freiheit mfg freiheit | 55 | 180_beneder freiheit_freiheit freiheit quelle_beschrieben freiheit_freiheit quelle | | 181 | 02 2023 stnewslive - gera 06 02 - 02 2023 naumburg - köthen 06 02 - 06 02 2023 | 55 | 181_02 2023 stnewslive_gera 06 02_02 2023 naumburg_köthen 06 02 | | 182 | verwaltungsgerichtshof vgh - dürfen ungeimpfte studenten - 2g regel hochschulen - regel hochschulen - verwaltungsgerichtshof | 54 | 
182_verwaltungsgerichtshof vgh_dürfen ungeimpfte studenten_2g regel hochschulen_regel hochschulen | | 183 | hauptmahlzeiten gefriergetrocknet wiederverschließbaren - tage hauptmahlzeiten gefriergetrocknet - hauptmahlzeiten gefriergetrocknet - langzeitlebensmittel tage notrationnimmt - mindesthaltbarkeit langzeitlebensmittel tage | 54 | 183_hauptmahlzeiten gefriergetrocknet wiederverschließbaren_tage hauptmahlzeiten gefriergetrocknet_hauptmahlzeiten gefriergetrocknet_langzeitlebensmittel tage notrationnimmt | | 184 | erneut österreichweite proteste - österreichweite proteste - österreichweite proteste landesregierungen - zuversichtlich tag protesttag - dezember erneut österreichweite | 54 | 184_erneut österreichweite proteste_österreichweite proteste_österreichweite proteste landesregierungen_zuversichtlich tag protesttag | | 185 | angst depressionen ersten - anstieg angst depressionen - depressionen ersten corona - angststörungen depressionen - angst depressionen | 54 | 185_angst depressionen ersten_anstieg angst depressionen_depressionen ersten corona_angststörungen depressionen | | 186 | vitalstoffen wesentlichen vitaminen - wesentlichen vitaminen - 32 vitalstoffen wesentlichen - vitaminen - vitamine | 54 | 186_vitalstoffen wesentlichen vitaminen_wesentlichen vitaminen_32 vitalstoffen wesentlichen_vitaminen | | 187 | aktionstages schutz österreichischen - österreichischen bundesverfassung auszug - diesmal oberösterreich - freedom austria gruppen - österreichischen bundesverfassung | 54 | 187_aktionstages schutz österreichischen_österreichischen bundesverfassung auszug_diesmal oberösterreich_freedom austria gruppen | | 188 | code checkout realnews - code checkout - use code checkout - checkout realnews crypto - addr1v94ayqu53uklgqnn6c4x4 use code | 53 | 188_code checkout realnews_code checkout_use code checkout_checkout realnews crypto | | 189 | hamburg heute strasse - düsseldorf heute strasse - folgt frankfurt - hamburg heute - folgt hamburg heute | 53 | 189_hamburg heute strasse_düsseldorf heute strasse_folgt frankfurt_hamburg heute | | 190 | unterstützung pcr tests - cdc pcr tests - pcr tests nachweis - pcr tests ende - pcr tests | 52 | 190_unterstützung pcr tests_cdc pcr tests_pcr tests nachweis_pcr tests ende | | 191 | pandemie finanzkräftige elite - finanzkräftige elite stecken - elite stecken leute - radical traditionalist catholics - chance agenda durchzusetzen | 52 | 191_pandemie finanzkräftige elite_finanzkräftige elite stecken_elite stecken leute_radical traditionalist catholics | | 192 | auflage transhumanismus krieg - transhumanismus krieg - transhumanismus krieg menschheit - zweite auflage transhumanismus - bestrebung transhumanismus monatelanger | 52 | 192_auflage transhumanismus krieg_transhumanismus krieg_transhumanismus krieg menschheit_zweite auflage transhumanismus | | 193 | zudem nachweis spikeproteinen - impf spikeprotein offenbar - gelungen impf spikeprotein - virus vorhandene nukleokapsid - vorhandene nukleokapsid nachgewiesen | 52 | 193_zudem nachweis spikeproteinen_impf spikeprotein offenbar_gelungen impf spikeprotein_virus vorhandene nukleokapsid | | 194 | februar vorbei proteste - vorbei proteste februar - vorbei proteste - widerstand kanadier einschlagen - widerstand kanadier | 52 | 194_februar vorbei proteste_vorbei proteste februar_vorbei proteste_widerstand kanadier einschlagen | | 195 | arabische ausländische extremisten - islamisten - angehörige extremisten ukraine - militante islamisten - extremisten ukraine | 52 | 195_arabische ausländische 
extremisten_islamisten_angehörige extremisten ukraine_militante islamisten | | 196 | erfahren sinn lebens - bemühen reden verstehen - menschen fehlt sinn - verstehen erfahren sinn - reden verstehen erfahren | 52 | 196_erfahren sinn lebens_bemühen reden verstehen_menschen fehlt sinn_verstehen erfahren sinn | | 197 | langzeitlebensmittel krisenvorsorge essen - krisenvorsorge essen - leer krisenfall supermärkte - krisenfall supermärkte - krisenfall supermärkte binnen | 51 | 197_langzeitlebensmittel krisenvorsorge essen_krisenvorsorge essen_leer krisenfall supermärkte_krisenfall supermärkte | | 198 | meistverkaufte produkt beim - meistverkaufte produkt - viren gut geeignet - bakterien viren gut - moment meistverkaufte produkt | 51 | 198_meistverkaufte produkt beim_meistverkaufte produkt_viren gut geeignet_bakterien viren gut | | 199 | konvoi kurz washington - people convoy - 000 fahrzeugen erreicht - 000 fahrzeuge länge - konvoi kurz hauptstadt | 51 | 199_konvoi kurz washington_people convoy_000 fahrzeugen erreicht_000 fahrzeuge länge | | 200 | todesfall - todesursache mirco - daran sterben - sterben kommt ran - daran sterben kommt | 51 | 200_todesfall_todesursache mirco_daran sterben_sterben kommt ran | | 201 | regulierungspaket senkung gasverbrauchs - senkung gasverbrauchs - gasreduktion plan sehe - projektgruppe gasreduktion - seien zinsgünstige solarförderkredite | 51 | 201_regulierungspaket senkung gasverbrauchs_senkung gasverbrauchs_gasreduktion plan sehe_projektgruppe gasreduktion | | 202 | twitter elon musk - wurde musk entlassene - musk bleiben twitter - musk - stasi team musk | 51 | 202_twitter elon musk_wurde musk entlassene_musk bleiben twitter_musk | | 203 | klartext bürgerprotest offenem - spricht klartext bürgerprotest - klartext bürgerprotest - impfzwang unabhängig kritisch - hetzkampagne polit | 50 | 203_klartext bürgerprotest offenem_spricht klartext bürgerprotest_klartext bürgerprotest_impfzwang unabhängig kritisch | | 204 | staus containerschiffe - containerschiffe bereits - containerschiffe bereits routen - containerschiffe - staus containerschiffe tag | 50 | 204_staus containerschiffe_containerschiffe bereits_containerschiffe bereits routen_containerschiffe | | 205 | kostete diesel - kostet liter diesel - liter diesel weiterhin - liter benzin - euro mehr liter | 50 | 205_kostete diesel_kostet liter diesel_liter diesel weiterhin_liter benzin | | 206 | integration unserer kinder - hyperintelligente kinder müssen - kinder müssen mehr - kinder solidarische gesellschaft - unserer kinder solidarische | 50 | 206_integration unserer kinder_hyperintelligente kinder müssen_kinder müssen mehr_kinder solidarische gesellschaft | | 207 | orf totalreform - haushaltsabgabe geben vielmehr - finanzierung orf övp - orf totalreform unterzogen - finanzierung neu regeln | 50 | 207_orf totalreform_haushaltsabgabe geben vielmehr_finanzierung orf övp_orf totalreform unterzogen | | 208 | kaliningrad informationsportal - kaliningrad informationsportal entwicklung - domizil kaliningrad informationsportal - informationen föderalen russland - russischen region | 50 | 208_kaliningrad informationsportal_kaliningrad informationsportal entwicklung_domizil kaliningrad informationsportal_informationen föderalen russland | | 209 | trotz kandidatur innenministerin - faeser trotz kandidatur - kandidatur innenministerin bleiben - kandidatur innenministerin - faeser ministerpräsidentin hessen | 49 | 209_trotz kandidatur innenministerin_faeser trotz kandidatur_kandidatur innenministerin bleiben_kandidatur 
innenministerin | | 210 | youtube gunnar kaiser - 09 übersetzung quelle - 09 übersetzung - twitter merchandising podcast - youtube gunnar | 49 | 210_youtube gunnar kaiser_09 übersetzung quelle_09 übersetzung_twitter merchandising podcast | | 211 | brennstoff betrieben geeignet - gulaschkanone eintopfofen grill - art brennstoff betrieben - sofort mobile kochmöglichkeit - eintopfofen grill | 48 | 211_brennstoff betrieben geeignet_gulaschkanone eintopfofen grill_art brennstoff betrieben_sofort mobile kochmöglichkeit | | 212 | impfpflicht gesundheitswesen bundestag - unterschriften impfpflicht gesundheitswesen - impfstreikbündnis ärzte aufklärung - 000 unterschriften impfpflicht - unterschriften impfpflicht | 48 | 212_impfpflicht gesundheitswesen bundestag_unterschriften impfpflicht gesundheitswesen_impfstreikbündnis ärzte aufklärung_000 unterschriften impfpflicht | | 213 | erwarten russland schlachtfeld - krieg russland hineinziehen - kriegsführung usa russland - neuen krieg russland - krieg russland ganz | 48 | 213_erwarten russland schlachtfeld_krieg russland hineinziehen_kriegsführung usa russland_neuen krieg russland | | 214 | blick rundbrief abonnieren - rundbrief abonnieren __________________ - rundbrief abonnieren wer - 7114 pobnskba crypto - rundbrief abonnieren | 48 | 214_blick rundbrief abonnieren_rundbrief abonnieren ___________________rundbrief abonnieren wer_7114 pobnskba crypto | | 215 | voraussetzungen grundrechtseingriffe erforderlichkeit - ermächtigungsgesetz gesundheitsminister möglichkeit - grundrechtseingriffe erforderlichkeit eignung - grundrechtseingriffe erforderlichkeit - erscheinen ermächtigungsgesetz gesundheitsminister | 48 | 215_voraussetzungen grundrechtseingriffe erforderlichkeit_ermächtigungsgesetz gesundheitsminister möglichkeit_grundrechtseingriffe erforderlichkeit eignung_grundrechtseingriffe erforderlichkeit | | 216 | sagt unwahrheit deutschlands - unwahrheit deutschlands - unwahrheit deutschlands inzidenz - ungeimpfte nehmen deutschland - deutschland mittlerweile durchgeimpft | 48 | 216_sagt unwahrheit deutschlands_unwahrheit deutschlands_unwahrheit deutschlands inzidenz_ungeimpfte nehmen deutschland | | 217 | wirtschafts korruptionsstaatsanwaltschaft wksta - korruptionsstaatsanwaltschaft wksta - wirtschafts korruptionsstaatsanwaltschaft - korruptionsstaatsanwaltschaft - türkis schwarzen korruptionssümpfe | 47 | 217_wirtschafts korruptionsstaatsanwaltschaft wksta_korruptionsstaatsanwaltschaft wksta_wirtschafts korruptionsstaatsanwaltschaft_korruptionsstaatsanwaltschaft | | 218 | präsident zelensky - ukrainische präsident - kutten schikaniert russland - kreisen unterstützte präsident - zelenskyy gefolgsmann klaus | 47 | 218_präsident zelensky_ukrainische präsident_kutten schikaniert russland_kreisen unterstützte präsident | | 219 | teil redakteurin interview - aufgezeichnet bricht interview - freunde video kanal - video freunde video - stimmungsvolle interview | 47 | 219_teil redakteurin interview_aufgezeichnet bricht interview_freunde video kanal_video freunde video | | 220 | sahara staub magnetisch - bezüglich sahara staub - 2022 sahara staub - zusendung bezüglich sahara - sahara staub | 47 | 220_sahara staub magnetisch_bezüglich sahara staub_2022 sahara staub_zusendung bezüglich sahara | | 221 | polizeikette bestehend vorwiegend - vorwiegend polizistinnen durchbrochen - polizei stellt absichtlich - vorwiegend polizistinnen - polizei stellt | 47 | 221_polizeikette bestehend vorwiegend_vorwiegend polizistinnen durchbrochen_polizei stellt 
absichtlich_vorwiegend polizistinnen | | 222 | impfstoff novavax tatsächlich - hoffnung novavax impfstoff - impfstoff zugelassen novavax - genau impfstoff novavax - neuen novavax impfstoff | 47 | 222_impfstoff novavax tatsächlich_hoffnung novavax impfstoff_impfstoff zugelassen novavax_genau impfstoff novavax | | 223 | widerstandsfähigem grauem pulverbeschichtetem - pulverbeschichtetem rostfreiem metall - rostfreiem metall tragegriffe - metall tragegriffe isoliert - rostfreiem metall | 47 | 223_widerstandsfähigem grauem pulverbeschichtetem_pulverbeschichtetem rostfreiem metall_rostfreiem metall tragegriffe_metall tragegriffe isoliert | | 224 | immerwährende neutralität österreichs - aufgabe österreichischen neutralität - österreichischen neutralität redet - neutralität österreichs unserer - neutralität österreichs verhandelbar | 47 | 224_immerwährende neutralität österreichs_aufgabe österreichischen neutralität_österreichischen neutralität redet_neutralität österreichs unserer | | 225 | gaststätten siehe berliner - vielgeprüftes österreich - müllerstraße blumenstraße friedlich - zug müllerstraße - vielgerühmtes österreich | 47 | 225_gaststätten siehe berliner_vielgeprüftes österreich_müllerstraße blumenstraße friedlich_zug müllerstraße | | 226 | zensur harmloses stäbchen - zensur findet statt - censorship zensur - zensur transparent weiterhin - zensur strikes löschungen | 47 | 226_zensur harmloses stäbchen_zensur findet statt_censorship zensur_zensur transparent weiterhin | | 227 | 19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin - 19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin sv - bitcoin core 19q8odiu2zar7dfl18ouqivwauvnripceu - core 19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin - bitcoin sv 1wxoeuy6ghetkmurdiipllwvya1vh2iwa | 46 | 227_19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin_19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin sv_bitcoin core 19q8odiu2zar7dfl18ouqivwauvnripceu_core 19q8odiu2zar7dfl18ouqivwauvnripceu bitcoin | | 228 | draufgelegt verbreitet verschwörungstheorien - hüter einzigen wahrheit - einzigen wahrheit eigentlich - wahrheit eigentlich gibt - modernen szientizismus wissenschaft | 46 | 228_draufgelegt verbreitet verschwörungstheorien_hüter einzigen wahrheit_einzigen wahrheit eigentlich_wahrheit eigentlich gibt | | 229 | german oli kanal - wahrheit aufhalten deutsch - aufhalten deutsch - aufhalten deutsch german - german oli enavie | 46 | 229_german oli kanal_wahrheit aufhalten deutsch_aufhalten deutsch_aufhalten deutsch german | | 230 | weltweiten schulden hoch - weltweiten schulden - schulden hoch - euro neue schulden - staatsschulden | 46 | 230_weltweiten schulden hoch_weltweiten schulden_schulden hoch_euro neue schulden | | 231 | pharma erkenntnisse vorenthalten - pharmaindustrie aufweisen fpö - pharmaunternehmen millionen ärzte - besitzt kette apotheken - pharmaindustrie genauer angesehen | 46 | 231_pharma erkenntnisse vorenthalten_pharmaindustrie aufweisen fpö_pharmaunternehmen millionen ärzte_besitzt kette apotheken | | 232 | europäische gerichtshof eugh - geht deutsche bundesverfassungsgericht - europäische gerichtshof - europäische kommission verklagt - deutsche bundesverfassungsgericht | 45 | 232_europäische gerichtshof eugh_geht deutsche bundesverfassungsgericht_europäische gerichtshof_europäische kommission verklagt | | 233 | antarctica secret meeting - antarctica secret - explore obscurity antarctica - obscurity antarctica - going antarctica secret | 45 | 233_antarctica secret meeting_antarctica secret_explore obscurity antarctica_obscurity antarctica | | 234 | sprach junge 
frau - moralapostel spielt - endlich gerechtigkeit - ungeimpfter recht - buchtipp macht wahn | 45 | 234_sprach junge frau_moralapostel spielt_endlich gerechtigkeit_ungeimpfter recht | | 235 | anschließenden protestmarsch ring - heldenplatz anschließenden protestmarsch - anschließenden protestmarsch - protestmarsch ring melde - protestmarsch | 45 | 235_anschließenden protestmarsch ring_heldenplatz anschließenden protestmarsch_anschließenden protestmarsch_protestmarsch ring melde | | 236 | deutschland versuchen impfpflicht - rücknahme impfpflichtgesetzes - sofortige rücknahme impfpflichtgesetzes - gilt einrichtungsbezogene impfpflicht - griff impfpflicht richten | 45 | 236_deutschland versuchen impfpflicht_rücknahme impfpflichtgesetzes_sofortige rücknahme impfpflichtgesetzes_gilt einrichtungsbezogene impfpflicht | | 237 | österreich impfpflicht mehr - österreich setzt impfpflicht - wichtig österreich neutral - sei österreich impfpflicht - österreich neutral verhält | 45 | 237_österreich impfpflicht mehr_österreich setzt impfpflicht_wichtig österreich neutral_sei österreich impfpflicht | | 238 | implementiert kollektiven stockholmsyndrom - kollektiven stockholmsyndrom geheilt - mittels dauerpropaganda psyche - dauerpropaganda psyche - kollektiven stockholmsyndrom | 44 | 238_implementiert kollektiven stockholmsyndrom_kollektiven stockholmsyndrom geheilt_mittels dauerpropaganda psyche_dauerpropaganda psyche | | 239 | verbieten kindesmord ersten - kindesmord ersten monat - kindesmord - freedom act 2022 - geisteskrank werten strafverfolgung | 44 | 239_verbieten kindesmord ersten_kindesmord ersten monat_kindesmord_freedom act 2022 | | 240 | sowie steigende düngemittelpreise - düngemittelpreise aufgezehrt - erzeugerpreise landwirtschaftlicher produkte - erzeugerpreise landwirtschaftlicher - landwirtschaftliche produkte gestiegen | 44 | 240_sowie steigende düngemittelpreise_düngemittelpreise aufgezehrt_erzeugerpreise landwirtschaftlicher produkte_erzeugerpreise landwirtschaftlicher | | 241 | reinigen guardian wasserfilter - entwickelte wasserfilter - guardian wasserfilter - guardian wasserfilter ursprünglich - entwickelte wasserfilter globetrotter | 44 | 241_reinigen guardian wasserfilter_entwickelte wasserfilter_guardian wasserfilter_guardian wasserfilter ursprünglich | | 242 | demonstranten skandieren büro - innsbruck versammlung rednern - innsbruck versammlung - demonstranten skandieren - tausend demonstranten skandieren | 44 | 242_demonstranten skandieren büro_innsbruck versammlung rednern_innsbruck versammlung_demonstranten skandieren | | 243 | trockenbrennstofftabletten 20 tabletten - esbit trockenbrennstofftabletten - esbit trockenbrennstofftabletten 20 - trockenbrennstofftabletten - trockenbrennstofftabletten 20 | 44 | 243_trockenbrennstofftabletten 20 tabletten_esbit trockenbrennstofftabletten_esbit trockenbrennstofftabletten 20_trockenbrennstofftabletten | | 244 | erz schwurbler danke - bitte graz danke - schwurbler danke - dank bitte - danke reutlingen bitte | 44 | 244_erz schwurbler danke_bitte graz danke_schwurbler danke_dank bitte | | 245 | weltweite corona krise - corona krise lockdowns - sechs monaten reduziert - monaten reduziert stehen - längeren zeitraum mindestens | 44 | 245_weltweite corona krise_corona krise lockdowns_sechs monaten reduziert_monaten reduziert stehen | | 246 | trinkwasserqualität innovative - maximale trinkwasserqualität innovative - trinkwasserqualität innovative aufbereitungstechnologie - genießen maximale trinkwasserqualität - profitieren sauberem wasser | 44 
| 246_trinkwasserqualität innovative_maximale trinkwasserqualität innovative_trinkwasserqualität innovative aufbereitungstechnologie_genießen maximale trinkwasserqualität | | 247 | kauf edelmetallen russland - edelmetallen russland normalerweise - edelmetallen russland - russland treibt entdollarisierung - inflation schützen moskau | 44 | 247_kauf edelmetallen russland_edelmetallen russland normalerweise_edelmetallen russland_russland treibt entdollarisierung | | 248 | intensivpatienten - kliniken denen ständig - intensiv belegungen wurden - intensivbetten - intensiv belegungen | 44 | 248_intensivpatienten_kliniken denen ständig_intensiv belegungen wurden_intensivbetten | | 249 | mitglied greenpeace - meinung ungeimpfte pflegekräfte - bewaffneten kinder suv - verzichten fahren pappschildern - fahren pappschildern bewaffneten | 44 | 249_mitglied greenpeace_meinung ungeimpfte pflegekräfte_bewaffneten kinder suv_verzichten fahren pappschildern | | 250 | kundgebung sonntag 13 - hauptbahnhof nächste versammlung - nächste versammlung - heldenplatz sei - 13 uhr hauptbahnhof | 44 | 250_kundgebung sonntag 13_hauptbahnhof nächste versammlung_nächste versammlung_heldenplatz sei | | 251 | funktioniert investieren anmelden - investieren anmelden teilnehmen - anmelden heute kryptomarkt - diejenigen bereit investieren - investmentspecial 000 10 | 44 | 251_funktioniert investieren anmelden_investieren anmelden teilnehmen_anmelden heute kryptomarkt_diejenigen bereit investieren | | 252 | goldgedeckten währung - leiter global currency - börsenschluss freitag - global currency - börsenschluss freitag 11 | 44 | 252_goldgedeckten währung_leiter global currency_börsenschluss freitag_global currency | | 253 | facebook wegen russischen - russland instagram - hass gewaltaufrufen russische - gewaltaufrufen russische - gewaltandrohungen russische | 43 | 253_facebook wegen russischen_russland instagram_hass gewaltaufrufen russische_gewaltaufrufen russische | | 254 | deutsches gericht bundesverfassungsgericht - beim bundesverfassungsgericht karlsruhe - karlsruhe bundesverfassungsgericht freitag - höchstes deutsches gericht - harbarth amtierenden präsidenten | 43 | 254_deutsches gericht bundesverfassungsgericht_beim bundesverfassungsgericht karlsruhe_karlsruhe bundesverfassungsgericht freitag_höchstes deutsches gericht | | 255 | unterkunft flüchtlinge ukraine - vergewaltigte ukrainerin - ukrainerin flüchtet - flüchtlinge ukraine dient - ukrainerin opfer sexuellen | 43 | 255_unterkunft flüchtlinge ukraine_vergewaltigte ukrainerin_ukrainerin flüchtet_flüchtlinge ukraine dient | | 256 | ergeben cdl wirksam - obwohl wirkung cdl - cdl wirksam - cdl obwohl wirkung - wirkung cdl | 43 | 256_ergeben cdl wirksam_obwohl wirkung cdl_cdl wirksam_cdl obwohl wirkung | | 257 | netzfund ichfragfüreinenfreund - netzfund netzfund ichfragfüreinenfreund - netzfund ichfragfüreinenfreund netzfund - netzwerke netzfund weltgeld - danke netzfund netzfund | 43 | 257_netzfund ichfragfüreinenfreund_netzfund netzfund ichfragfüreinenfreund_netzfund ichfragfüreinenfreund netzfund_netzwerke netzfund weltgeld | | 258 | läßt ultimatum cdu - vorschlag parteipräsidiums zuvor - cdu vorstands partei - künftig parteimitglied - ex verfassungsschutzchef maaßen | 43 | 258_läßt ultimatum cdu_vorschlag parteipräsidiums zuvor_cdu vorstands partei_künftig parteimitglied | | 259 | schwul allein reicht - lösung transhumanisten menschen - transhumanisten menschen - weißes leider verschwörungstheorie - schwul | 43 | 259_schwul allein reicht_lösung transhumanisten 
menschen_transhumanisten menschen_weißes leider verschwörungstheorie | | 260 | sendungen russland einstellen - sehen russland beschließt - strafen falschbehauptungen russische - russland einstellen - sendungen russland einzustellen | 43 | 260_sendungen russland einstellen_sehen russland beschließt_strafen falschbehauptungen russische_russland einstellen | | 261 | täglich news hintergrundinfos - bericht täglich news - täglich news - news show - mehr news show | 43 | 261_täglich news hintergrundinfos_bericht täglich news_täglich news_news show | | 262 | batteriegespeisten stromgeneratoren powerstation - versorgt enorme akkukapazität - neue maßstäbe batteriegespeisten - enorme akkukapazität - powerstation vielzahl geräten | 42 | 262_batteriegespeisten stromgeneratoren powerstation_versorgt enorme akkukapazität_neue maßstäbe batteriegespeisten_enorme akkukapazität | | 263 | russiagate zugriff sozialenmedien - medien russland ergreifen - russiagate zugriff - internet russland - verbreitung beiträgen russischen | 42 | 263_russiagate zugriff sozialenmedien_medien russland ergreifen_russiagate zugriff_internet russland | | 264 | eigentlich verschwörungstheorie galt - schon realität entlarvung - eigentlich verschwörungstheorie - mal corona betrifft - verschwörungstheorie bewahrheitet hätte | 42 | 264_eigentlich verschwörungstheorie galt_schon realität entlarvung_eigentlich verschwörungstheorie_mal corona betrifft | | 265 | kaffee liegt guayusa - unterschied mate kaffee - kaffees energydrinks - lieblingskaffees tees besonders - kaffees energydrinks koffein | 42 | 265_kaffee liegt guayusa_unterschied mate kaffee_kaffees energydrinks_lieblingskaffees tees besonders | | 266 | inzwischen meisten patienten - meisten patienten corona - anzahl corona patienten - meisten patienten - kämen wegen coronainfektion | 42 | 266_inzwischen meisten patienten_meisten patienten corona_anzahl corona patienten_meisten patienten | | 267 | lieferung kampfjets ukraine - kampfjets ukraine europas - darüber kampfjets ukraine - lask polnische kampfflugzeuge - kampfjets ukraine | 42 | 267_lieferung kampfjets ukraine_kampfjets ukraine europas_darüber kampfjets ukraine_lask polnische kampfflugzeuge | | 268 | ukraine energiebereich unterstützen - eu ukraine energiebereich - ukraine energiebereich - ukraine teil europas - bereich ukraine | 42 | 268_ukraine energiebereich unterstützen_eu ukraine energiebereich_ukraine energiebereich_ukraine teil europas | | 269 | magnesium lebenswichtig - magnesiummangel - bleiben magnesium - magnesium - überlebenstechniken überleben extremsituationen | 41 | 269_magnesium lebenswichtig_magnesiummangel_bleiben magnesium_magnesium | | 270 | russland zögerlich wirklich - russland zögerlich - biden russland - bezug russland zögerlich - biden bezug russland | 41 | 270_russland zögerlich wirklich_russland zögerlich_biden russland_bezug russland zögerlich | | 271 | weu8zk4uw78km8capd5rjdc06q28j370 hex 0xd449694348b1d618eca2829bbc901782f5172689 - 0xd449694348b1d618eca2829bbc901782f5172689 emc2 exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r - weu8zk4uw78km8capd5rjdc06q28j370 hex - weu8zk4uw78km8capd5rjdc06q28j370 - duran addr1v94ayqu53uklgqnn6c4x4weu8zk4uw78km8capd5rjdc06q28j370 hex | 41 | 271_weu8zk4uw78km8capd5rjdc06q28j370 hex 0xd449694348b1d618eca2829bbc901782f5172689_0xd449694348b1d618eca2829bbc901782f5172689 emc2 exx4kk9pzlx7uilwncxtp7imkjtq6o5b6r_weu8zk4uw78km8capd5rjdc06q28j370 hex_weu8zk4uw78km8capd5rjdc06q28j370 | | 272 | mehrere fotos anlässen - unregierbar foto seid - unregierbar foto - seid unregierbar 
foto - foto seid unregierbar | 41 | 272_mehrere fotos anlässen_unregierbar foto seid_unregierbar foto_seid unregierbar foto | | 273 | biochemiker heilpraktiker hippokratischen - heilpraktiker hippokratischen - pharmaindustrie schulmedizin hinaus - heilpraktiker hippokratischen eid - pharmaindustrie schulmedizin | 41 | 273_biochemiker heilpraktiker hippokratischen_heilpraktiker hippokratischen_pharmaindustrie schulmedizin hinaus_heilpraktiker hippokratischen eid | | 274 | erklärungen werkes lichte - werkes lichte wahrheit - jenseits erklärungen werkes - sinnzusammenhänge schöpfung leben - werk lichte wahrheit | 41 | 274_erklärungen werkes lichte_werkes lichte wahrheit_jenseits erklärungen werkes_sinnzusammenhänge schöpfung leben | | 275 | beschlossen impfpflicht österreich - impfpflicht österreich umgehend - impfpflicht österreich - offenbar verantwortungsträger impfzwang - impfpflicht regelrechte schande | 41 | 275_beschlossen impfpflicht österreich_impfpflicht österreich umgehend_impfpflicht österreich_offenbar verantwortungsträger impfzwang | | 276 | alex audio - alexander alex audio - music contribution peter - alex audio podcasts - peter music tägliche | 41 | 276_alex audio_alexander alex audio_music contribution peter_alex audio podcasts | | 277 | feuer wetter zuverlässigkeit - anzünden feuerstahl denkbar - ermöglichen entzünden feuer - feuerstahl denkbar - entwickelt lässt feuerstahl | 41 | 277_feuer wetter zuverlässigkeit_anzünden feuerstahl denkbar_ermöglichen entzünden feuer_feuerstahl denkbar | | 278 | konservierungsmethode meisten nährstoffe - begeistern neuen infrarot - meisten nährstoffe erhalten - haltbar nutzen konservierungsmethode - neuen infrarot dörrautomat | 41 | 278_konservierungsmethode meisten nährstoffe_begeistern neuen infrarot_meisten nährstoffe erhalten_haltbar nutzen konservierungsmethode | | 279 | assanges auslieferung - assanges - auslieferung wikileaks gründer - auslieferung wikileaks - wikileaks gründer | 40 | 279_assanges auslieferung_assanges_auslieferung wikileaks gründer_auslieferung wikileaks | | 280 | ende deutschland wasserstandsmeldung - deutschland wasserstandsmeldung - boden zerstörten deutschland - zerstörten deutschland - zerstörten deutschland ende | 40 | 280_ende deutschland wasserstandsmeldung_deutschland wasserstandsmeldung_boden zerstörten deutschland_zerstörten deutschland | | 281 | krisenfall ausfällen energie - heizung löschautomatik petroleumheizung - energie gas stromversorgung - petroleumheizung mobile - löschautomatik petroleumheizung | 40 | 281_krisenfall ausfällen energie_heizung löschautomatik petroleumheizung_energie gas stromversorgung_petroleumheizung mobile | | 282 | jahr 2021 brachte - eigentlich 2022 - 2021 brachte mehr - erwartet eigentlich 2022 - 2026 | 40 | 282_jahr 2021 brachte_eigentlich 2022_2021 brachte mehr_erwartet eigentlich 2022 | | 283 | kinderschänder österreich - daran kinderschänder österreich - kinderschänder österreich teil - österreich teil - geben geschändeten kindern | 40 | 283_kinderschänder österreich_daran kinderschänder österreich_kinderschänder österreich teil_österreich teil | | 284 | überleben globaler katastrophen - zeiten krisen katastrophen - krisen katastrophen - globaler katastrophen gemeint - globaler katastrophen | 40 | 284_überleben globaler katastrophen_zeiten krisen katastrophen_krisen katastrophen_globaler katastrophen gemeint | | 285 | stadt veranstalter demonstration - demonstrationsgeschehen mitbekommt - demonstrationsgeschehen mitbekommt menschen - daher polizeipräsenz demonstranten - 
weitere großdemonstration stattfinden | 40 | 285_stadt veranstalter demonstration_demonstrationsgeschehen mitbekommt_demonstrationsgeschehen mitbekommt menschen_daher polizeipräsenz demonstranten | | 286 | mehl verarbeitet getreidesorten - mahlen menschen gern - feines mehl verarbeitet - besonders feines mehl - mehl verarbeitet | 40 | 286_mehl verarbeitet getreidesorten_mahlen menschen gern_feines mehl verarbeitet_besonders feines mehl | | 287 | boden zerstörten deutschland - deutschland versteckten ziele - ende deutschland versteckten - zerstörten deutschland - zerstörten deutschland ende | 39 | 287_boden zerstörten deutschland_deutschland versteckten ziele_ende deutschland versteckten_zerstörten deutschland | | 288 | konservierungsmethode meisten nährstoffe - begeistern neuen infrarot - meisten nährstoffe erhalten - neuen infrarot - nährstoffe erhalten | 39 | 288_konservierungsmethode meisten nährstoffe_begeistern neuen infrarot_meisten nährstoffe erhalten_neuen infrarot | | 289 | soeben telegram gesperrt - telegram sperrt offensichtlich - telegram gesperrt - telegram gesperrt scheint - telegram schränkt | 39 | 289_soeben telegram gesperrt_telegram sperrt offensichtlich_telegram gesperrt_telegram gesperrt scheint | | 290 | fpö vehement impfpflicht - setzt impfpflicht obwohl - vehement impfpflicht einsetzt - aktuell einführung impfzwanges - vehement impfpflicht | 39 | 290_fpö vehement impfpflicht_setzt impfpflicht obwohl_vehement impfpflicht einsetzt_aktuell einführung impfzwanges | | 291 | personal greetings germany - greetings germany - greetings germany go - germany go patriots - germany | 39 | 291_personal greetings germany_greetings germany_greetings germany go_germany go patriots | | 292 | personal greetings germany - greetings germany - greetings germany go - germany go patriots - germany | 39 | 292_personal greetings germany_greetings germany_greetings germany go_germany go patriots | | 293 | februar 2023 montagsspaziergang - 2023 montagsspaziergang leinefelde - sache straße großen - straße großen - 2023 montagsspaziergang | 39 | 293_februar 2023 montagsspaziergang_2023 montagsspaziergang leinefelde_sache straße großen_straße großen | | 294 | nutzen versteckte ressourcen - versteckte ressourcen nutzen - nehmen versteckte ressourcen - ressourcen nutzen versteckte - wahr versteckte ressourcen | 39 | 294_nutzen versteckte ressourcen_versteckte ressourcen nutzen_nehmen versteckte ressourcen_ressourcen nutzen versteckte | | 295 | millionen tote weltweit - zensiert wurde artikel - bereits zensiert artikel - artikel zensiert wurde - tote weltweit | 39 | 295_millionen tote weltweit_zensiert wurde artikel_bereits zensiert artikel_artikel zensiert wurde | | 296 | 12 2021 düsseldorf - 2021 düsseldorf - düsseldorf 18 - düsseldorf 18 12 - münchen2212 | 39 | 296_12 2021 düsseldorf_2021 düsseldorf_düsseldorf 18_düsseldorf 18 12 | | 297 | link bestätigen müsst - wichtig bekommt bestätigungsmail - tragen zensurfreien newsletter - bekommt bestätigungsmail zugesandt - bestätigen müsst | 39 | 297_link bestätigen müsst_wichtig bekommt bestätigungsmail_tragen zensurfreien newsletter_bekommt bestätigungsmail zugesandt | | 298 | landesrat niederösterreich sagte - lockdowns österreich mehr - europa längsten schulschliessungen - landesrat niederösterreich - schülerin konsequenten asyl | 39 | 298_landesrat niederösterreich sagte_lockdowns österreich mehr_europa längsten schulschliessungen_landesrat niederösterreich | | 299 | gefährlichen regierung tun - gefährlichen regierung - regierung ausgedient 
rücktritt - berechnenden gefährlichen regierung - gerade regierung | 39 | 299_gefährlichen regierung tun_gefährlichen regierung_regierung ausgedient rücktritt_berechnenden gefährlichen regierung | | 300 | inzwischen gesundheitsminister - rauch neuer gesundheitsminister - rauch inzwischen gesundheitsminister - gesundheitsminister denke - gesundheitsminister geschichte | 38 | 300_inzwischen gesundheitsminister_rauch neuer gesundheitsminister_rauch inzwischen gesundheitsminister_gesundheitsminister denke | | 301 | schutzimpfungen coronavirus sars - zeigen schutzimpfungen coronavirus - schweren krankheitsverläufen schützen - krankheitsverläufen schützen führen - krankheitsverläufen schützen | 38 | 301_schutzimpfungen coronavirus sars_zeigen schutzimpfungen coronavirus_schweren krankheitsverläufen schützen_krankheitsverläufen schützen führen | | 302 | empfehlungen survival spezialisten - überleben sämtliche hilfsmittel - sämtliche hilfsmittel urlaubs - bestandteile empfehlungen survival - expeditionen krisengebieten bestens | 38 | 302_empfehlungen survival spezialisten_überleben sämtliche hilfsmittel_sämtliche hilfsmittel urlaubs_bestandteile empfehlungen survival | | 303 | katastrophe machttaktischen überlegungen - katastrophe beherrschende thema - starkregen katastrophe beherrschende - katastrophe wurde erst - ahrtal katastrophe wurde | 38 | 303_katastrophe machttaktischen überlegungen_katastrophe beherrschende thema_starkregen katastrophe beherrschende_katastrophe wurde erst | | 304 | stattfinden sagte lauterbach - sagte lauterbach bitte - sagte lauterbach - vielleicht herrn lauterbach - realistisch sagte lauterbach | 38 | 304_stattfinden sagte lauterbach_sagte lauterbach bitte_sagte lauterbach_vielleicht herrn lauterbach | | 305 | gedreht politik läuft - hut nehmen neuwahlen - längst beschlossen diktatur - regierung hut nehmen - nehmen neuwahlen auszurufen | 38 | 305_gedreht politik läuft_hut nehmen neuwahlen_längst beschlossen diktatur_regierung hut nehmen | | 306 | brachte buschauffeur - schon mut busfahrer - november brachte buschauffeur - brachte buschauffeur botschaft - linz schon mut | 38 | 306_brachte buschauffeur_schon mut busfahrer_november brachte buschauffeur_brachte buschauffeur botschaft | | 307 | schöne volkslied lied - schöne volkslied - song draußen diktieren - song draußen - song ehemals | 38 | 307_schöne volkslied lied_schöne volkslied_song draußen diktieren_song draußen | | 308 | redaktion entscheidung getroffen - leiteten angeklagten schreiber - artikel berliner zeitung - publizieren - berichtet viele leser | 38 | 308_redaktion entscheidung getroffen_leiteten angeklagten schreiber_artikel berliner zeitung_publizieren | | 309 | deutsch german hallo - german hallo meinung - german hallo - nürnberg peter weber - freies denken politische | 38 | 309_deutsch german hallo_german hallo meinung_german hallo_nürnberg peter weber | | 310 | ordi bald bald - ordi bald - ohoooho betreff - augkleber ordi - augkleber ordi bald | 38 | 310_ordi bald bald_ordi bald_ohoooho betreff_augkleber ordi | | 311 | uhr versammlungsort karlsplatz - 00 uhr schwarzenbergplatz - uhr ekz leibnitz - uhr schwarzenbergplatz life - leibnitz 17 00 | 37 | 311_uhr versammlungsort karlsplatz_00 uhr schwarzenbergplatz_uhr ekz leibnitz_uhr schwarzenbergplatz life | | 312 | russischer patienten aufgrund - russischer patienten - behandlung russischer patienten - russische patienten - behandlung russischer | 37 | 312_russischer patienten aufgrund_russischer patienten_behandlung russischer patienten_russische 
patienten | | 313 | verschwörungstheoretikern impfpflicht nehmen - ausschluss verschwörungstheoretikern impfpflicht - verschwörungstheoretikern impfpflicht - land geiselhaft impfpflicht - versprechen durchsetzung impfzwangs | 37 | 313_verschwörungstheoretikern impfpflicht nehmen_ausschluss verschwörungstheoretikern impfpflicht_verschwörungstheoretikern impfpflicht_land geiselhaft impfpflicht | | 314 | sonntag liebe - sonntag liebe freundinnen - freitag liebe freundinnen - liebe freundinnen freunde - wundervollen samstag | 37 | 314_sonntag liebe_sonntag liebe freundinnen_freitag liebe freundinnen_liebe freundinnen freunde | | 315 | 000 friedliche demonstranten - friedliche demonstranten heute - unteilbar verhandelbar protest - demonstranten heute lautstarkes - protest samstag voller | 37 | 315_000 friedliche demonstranten_friedliche demonstranten heute_unteilbar verhandelbar protest_demonstranten heute lautstarkes | | 316 | natürlich kurz gesellschaftliche - kurz gesellschaftliche situation - politik leitmedien vertretene - städtegruppen telegram - menschen initiativen | 37 | 316_natürlich kurz gesellschaftliche_kurz gesellschaftliche situation_politik leitmedien vertretene_städtegruppen telegram | | 317 | wireless charging ermöglicht - qi wireless charging - aufgeladen powerbank bietet - wireless charging - aufgeladen powerbank | 37 | 317_wireless charging ermöglicht_qi wireless charging_aufgeladen powerbank bietet_wireless charging | | 318 | ignazbearth budapest - hoffentlich ungarischen - 15 03 budapest - 03 budapest steht - 03 budapest | 37 | 318_ignazbearth budapest_hoffentlich ungarischen_15 03 budapest_03 budapest steht | | 319 | ukrainer millionensummen biden - biden damaligen ukrainischen - warum zahlten ukrainer - ukraine skandal us - us präsidentenfamilie bidenim | 37 | 319_ukrainer millionensummen biden_biden damaligen ukrainischen_warum zahlten ukrainer_ukraine skandal us | | 320 | wahrheit bombenterror 78 - lieferbar wahrheit bombenterror - wahrheit bombenterror - bombenterror 78 jahren - bombenterror | 37 | 320_wahrheit bombenterror 78_lieferbar wahrheit bombenterror_wahrheit bombenterror_bombenterror 78 jahren | | 321 | flasche autark lampenöl - autark lampenöl ausgießtülle - lampenöl ausgießtülle versehen - lampenöl ausgießtülle - vorteile autark lampenöl | 37 | 321_flasche autark lampenöl_autark lampenöl ausgießtülle_lampenöl ausgießtülle versehen_lampenöl ausgießtülle | | 322 | liebe mitmenschen - lieblingsmenschen endlich - lebenswerte zukunft frieden - lieblingsmenschen endlich erfüllung - liebe immer | 37 | 322_liebe mitmenschen_lieblingsmenschen endlich_lebenswerte zukunft frieden_lieblingsmenschen endlich erfüllung | | 323 | gelassener fühlst zudem - fühlst zudem wacher - steigert fruchtbarkeit ja - verdauung läuft hochtouren - libido geringeres stressempfinden | 37 | 323_gelassener fühlst zudem_fühlst zudem wacher_steigert fruchtbarkeit ja_verdauung läuft hochtouren | | 324 | immer bombe entschärfe - telegram folge rabbit - rabbit research telegram - erstmal sicher beschriftungen - überhaupt immer bombe | 37 | 324_immer bombe entschärfe_telegram folge rabbit_rabbit research telegram_erstmal sicher beschriftungen | | 325 | untersagt verbreitung russia - weswegen russland eingreifen - russischen propagandakanäle - feindstaatenklausel weswegen russland - russland eingetreten deutschland | 37 | 325_untersagt verbreitung russia_weswegen russland eingreifen_russischen propagandakanäle_feindstaatenklausel weswegen russland | | 326 | freistaat bayern katastrophenfall - 
blackouts gibt stadt - straßensperren ausgangssperren passieren - russland ukraine primär - erlass katastrophenfalls überschrift | 37 | 326_freistaat bayern katastrophenfall_blackouts gibt stadt_straßensperren ausgangssperren passieren_russland ukraine primär | | 327 | montagabend massiver cyberangriff - websites israelischen regierung - cyberangriffs - cyberangriff - cyberangriff websites | 37 | 327_montagabend massiver cyberangriff_websites israelischen regierung_cyberangriffs_cyberangriff | | 328 | videos odysee - videos catherine thurner - catherines blick youtube - videos catherine - schauen veezee video | 37 | 328_videos odysee_videos catherine thurner_catherines blick youtube_videos catherine | | 329 | ungarische ministerpräsident orban - staatliche ungarische - ungarische ministerpräsident - ungarns ministerpräsident viktor - ungarns ministerpräsident | 37 | 329_ungarische ministerpräsident orban_staatliche ungarische_ungarische ministerpräsident_ungarns ministerpräsident viktor | | 330 | media see stew - stew social media - see stew - follow stew social - world zelenko | 37 | 330_media see stew_stew social media_see stew_follow stew social | | 331 | toward helping us - health ranger store - helping us - helping us achieve - helping create better | 37 | 331_toward helping us_health ranger store_helping us_helping us achieve | | 332 | untersuchungsausschuss vorgänge jahrhundertkatastrophe - untersuchungsausschuss ahrtal katastrophe - flutkatastrophe regierungsverantwortung damals - menschen flutkatastrophe regierungsverantwortung - jahrhundertkatastrophe ahrtal aufklären | 36 | 332_untersuchungsausschuss vorgänge jahrhundertkatastrophe_untersuchungsausschuss ahrtal katastrophe_flutkatastrophe regierungsverantwortung damals_menschen flutkatastrophe regierungsverantwortung | | 333 | live stream deutschland - live stream - streamen gettr live - tages live stream - eilmeldung live münchen | 36 | 333_live stream deutschland_live stream_streamen gettr live_tages live stream | | 334 | streik eintragen - teilen streik eintragen - organisiertes streikpotenzial brauchen - melde streik - braucht organisiertes streikpotenzial | 36 | 334_streik eintragen_teilen streik eintragen_organisiertes streikpotenzial brauchen_melde streik | | 335 | kritisieren mediziner - ärztekammerpräsidenten kritisieren mediziner - impfungen generell ablehnen - bevorstehende impfpflicht sowie - impfpflicht sowie auftreten | 36 | 335_kritisieren mediziner_ärztekammerpräsidenten kritisieren mediziner_impfungen generell ablehnen_bevorstehende impfpflicht sowie | | 336 | präsentiert nora hesse - präsentiert nora - vogt tina wenko - janotka bernhard riegler - isabelle janotka bernhard | 36 | 336_präsentiert nora hesse_präsentiert nora_vogt tina wenko_janotka bernhard riegler | | 337 | personal greetings germany - greetings germany - greetings germany go - germany go patriots - germany | 36 | 337_personal greetings germany_greetings germany_greetings germany go_germany go patriots | | 338 | mülltonne demokratie - genauso demokratie anstrebt - demokratie anstrebt - nein echte demokratie - direkte demokratie geht | 36 | 338_mülltonne demokratie_genauso demokratie anstrebt_demokratie anstrebt_nein echte demokratie | | 339 | generelle impfpflicht unverantwortliches - impfpflicht unverantwortliches - ausgeschlossen generelle impfpflicht - impfpflicht gesetz ehemaliger - impfpflicht unverantwortliches verbrechen | 36 | 339_generelle impfpflicht unverantwortliches_impfpflicht unverantwortliches_ausgeschlossen generelle 
impfpflicht_impfpflicht gesetz ehemaliger | | 340 | desinformationsakteure sanktionieren sagte - desinformations akteure sanktionieren - falschinformationen sanktionieren - staatliche zensur - zensurmaßnahmen | 36 | 340_desinformationsakteure sanktionieren sagte_desinformations akteure sanktionieren_falschinformationen sanktionieren_staatliche zensur | | 341 | depressionen nachdem sydney - quarantänecamp australien geflüchtet - brisbane australien - quarantänecamp australien - ruhigem herzen australien | 36 | 341_depressionen nachdem sydney_quarantänecamp australien geflüchtet_brisbane australien_quarantänecamp australien | | 342 | österreichische politik nimmt - österreichische politik - österreichs amtsräumen ausgehen - aufgepasst wissen österreich - gefallen österreichs amtsräumen | 36 | 342_österreichische politik nimmt_österreichische politik_österreichs amtsräumen ausgehen_aufgepasst wissen österreich | | 343 | demowerbung öffentlich rede - demowerbung öffentlich - plus demowerbung öffentlich - impfschäden kennen moderatorin - demonstrationen vorbeizuschauen redaktionsteam | 35 | 343_demowerbung öffentlich rede_demowerbung öffentlich_plus demowerbung öffentlich_impfschäden kennen moderatorin | | 344 | fettlösliches vitamin k2 - vitamin k2 mct - vital vitamin k2 - vitamin k2 d3 - fettlösliches vitamin mct | 35 | 344_fettlösliches vitamin k2_vitamin k2 mct_vital vitamin k2_vitamin k2 d3 | | 345 | petroleumheizung hierfür gute - petroleumheizung hierfür - vorteile petroleumheizung hierfür - folgende vorteile petroleumheizung - petroleumheizung | 35 | 345_petroleumheizung hierfür gute_petroleumheizung hierfür_vorteile petroleumheizung hierfür_folgende vorteile petroleumheizung | | 346 | kinderpornografie - kindesmissbrauch vorgeschoben - mehr zensur digitaler - kinderpornos - aufspüren aushebeln systemkritikern | 35 | 346_kinderpornografie_kindesmissbrauch vorgeschoben_mehr zensur digitaler_kinderpornos | | 347 | agrarexporte china machten - agrarexporte china - erlassen chinas kohlekrise - china breitet droht - chinas kohlekrise | 35 | 347_agrarexporte china machten_agrarexporte china_erlassen chinas kohlekrise_china breitet droht | | 348 | liebe heiligen geist - leben jesus - jesus christus - heiligen geist - jesus | 35 | 348_liebe heiligen geist_leben jesus_jesus christus_heiligen geist | | 349 | beiersdorf mehr discounter - lebensmittel discountern - händler - discountern - bestimmte markenprodukte mehr | 35 | 349_beiersdorf mehr discounter_lebensmittel discountern_händler_discountern | | 350 | haushaltsgeräte elektrowerkzeuge powerstation - benötigt maximal stunden - elektrowerkzeuge powerstation netzsteckdosen - nahezu haushaltsgeräte elektrowerkzeuge - müssen 1000 vollzeitstellen | 35 | 350_haushaltsgeräte elektrowerkzeuge powerstation_benötigt maximal stunden_elektrowerkzeuge powerstation netzsteckdosen_nahezu haushaltsgeräte elektrowerkzeuge | | 351 | komm bubble abonniere - geschwurbels kumm twitter - komm bubble newsletter - bubble abonniere newsletter - bubble newsletter abonnieren | 35 | 351_komm bubble abonniere_geschwurbels kumm twitter_komm bubble newsletter_bubble abonniere newsletter | | 352 | telegram sowie videokanal - videokanal media rebell - sowie videokanal media - videokanal eigener sache - kanal veröffentlicht video | 35 | 352_telegram sowie videokanal_videokanal media rebell_sowie videokanal media_videokanal eigener sache | | 353 | verwenden gruppe hören - funkgeräte pro gruppe - kanal gleiche verschlüsselung - solange gleichen kanal - gruppe hören miteinander 
| 34 | 353_verwenden gruppe hören_funkgeräte pro gruppe_kanal gleiche verschlüsselung_solange gleichen kanal | | 354 | batteriegespeisten stromgeneratoren powerstation - neue maßstäbe batteriegespeisten - maßstäbe batteriegespeisten - akkukapazität - powerstation stromvorrat speichern | 34 | 354_batteriegespeisten stromgeneratoren powerstation_neue maßstäbe batteriegespeisten_maßstäbe batteriegespeisten_akkukapazität | | 355 | diem guten morgen - guten morgen wünschen - guten morgen - guten morgen guten - guten morgen buon | 34 | 355_diem guten morgen_guten morgen wünschen_guten morgen_guten morgen guten | | 356 | aramid taktischen handschuhe - gepolstert robuste handschuhe - robuste handschuhe handfläche - handschuhe leder aramid - robuste handschuhe | 34 | 356_aramid taktischen handschuhe_gepolstert robuste handschuhe_robuste handschuhe handfläche_handschuhe leder aramid | | 357 | zusammen friedlich - zusammen friedlich zurück - holen zusammen friedlich - friedlich zurück freie - findet telegramgruppen | 34 | 357_zusammen friedlich_zusammen friedlich zurück_holen zusammen friedlich_friedlich zurück freie | | 358 | demonstrationen corona politik3 - fragt demonstrationen nützen - demonstrationen nützen demonstrationen - fragt demonstrationen - verbote corona demonstrationen | 34 | 358_demonstrationen corona politik3_fragt demonstrationen nützen_demonstrationen nützen demonstrationen_fragt demonstrationen | | 359 | bereits österreichs polizisten - hass polizeibeamten wien - rückhalt polizei - polizei aufzuhetzen zulassen - wertvollste österreich rückhalt | 34 | 359_bereits österreichs polizisten_hass polizeibeamten wien_rückhalt polizei_polizei aufzuhetzen zulassen | | 360 | unterstützung lion media - inhaber lion media - lion media paypal - lion media videos - lion media iban | 34 | 360_unterstützung lion media_inhaber lion media_lion media paypal_lion media videos | | 361 | schwedischen sonderweg referieren - nordens dänemark - dänemark - kopenhagen - städtische bevölkerung deutschland | 34 | 361_schwedischen sonderweg referieren_nordens dänemark_dänemark_kopenhagen | | 362 | impfstatus wurden hamburg - schweren impfschäden europa - ungeklärtem impfstatus wurden - 3452 neuinfektionen hamburg - bürgermeister peter tschentscher | 34 | 362_impfstatus wurden hamburg_schweren impfschäden europa_ungeklärtem impfstatus wurden_3452 neuinfektionen hamburg | | 363 | covid 19 erkrankung - woche vierten infektionswelle - vierten infektionswelle krankenhaus - infektionswelle krankenhaus eingeliefert - vierten infektionswelle | 34 | 363_covid 19 erkrankung_woche vierten infektionswelle_vierten infektionswelle krankenhaus_infektionswelle krankenhaus eingeliefert | | 364 | digitale euro kommt - digitaler euro lange - digitalen euro - digitaler euro - digitale euro | 34 | 364_digitale euro kommt_digitaler euro lange_digitalen euro_digitaler euro | | 365 | verklagt europäische kommission - europäische kommission - verklagt europäische - impfstoffe unsicher robert - covid impfstoffen veröffentlichen | 34 | 365_verklagt europäische kommission_europäische kommission_verklagt europäische_impfstoffe unsicher robert | | 366 | gefährlicher eu verordnung - eu verordnung verschlüsselte - verschobene eu verordnung - verordnung verschlüsselte - netzwerke entschlossener entgegenzutreten | 34 | 366_gefährlicher eu verordnung_eu verordnung verschlüsselte_verschobene eu verordnung_verordnung verschlüsselte | | 367 | österreichs ukraine krise - vfgh neutralität österreichs - neutralität österreichs ukraine - ukraine krise 
eu - mfg pressemitteilung bundesvorstand | 34 | 367_österreichs ukraine krise_vfgh neutralität österreichs_neutralität österreichs ukraine_ukraine krise eu | | 368 | saudi arabien china - saudi arabien scheint - saudi arabien - saudi arabien denkt - arabien erwägt | 34 | 368_saudi arabien china_saudi arabien scheint_saudi arabien_saudi arabien denkt | | 369 | jahren hilfsorganisationen katastrophenschutz - hilfsorganisationen katastrophenschutz - gekocht krisenvorsorge warum - warum haushalt bp - krisenvorsorge warum | 33 | 369_jahren hilfsorganisationen katastrophenschutz_hilfsorganisationen katastrophenschutz_gekocht krisenvorsorge warum_warum haushalt bp | | 370 | voraus herzlichen dank - dank bereits - herzlichen dank ganz - vielen dank bereits - herzlichen dank wirken | 33 | 370_voraus herzlichen dank_dank bereits_herzlichen dank ganz_vielen dank bereits | | 371 | lagerung schmeckt gut - bestens geeignet bp - lagerung schmeckt - einfache lagerung schmeckt - geeignet bp | 33 | 371_lagerung schmeckt gut_bestens geeignet bp_lagerung schmeckt_einfache lagerung schmeckt | | 372 | sowjetische operativ taktische - sowjetische operativ - gebiet gestarteten militärdrohne - schreibt sowjetische operativ - flugzeugfabrik charkow derzeit | 33 | 372_sowjetische operativ taktische_sowjetische operativ_gebiet gestarteten militärdrohne_schreibt sowjetische operativ | | 373 | part cosmic interview - via cosmic interview - cosmic interview clif - cosmic interview - cosmic interview khazarians | 33 | 373_part cosmic interview_via cosmic interview_cosmic interview clif_cosmic interview | | 374 | ausgezeichnetem edelmetalldepot drohenden - lade gerne gold - edelmetalldepot drohenden - gerne gold app - lade gold | 33 | 374_ausgezeichnetem edelmetalldepot drohenden_lade gerne gold_edelmetalldepot drohenden_gerne gold app | | 375 | sozialen medien congresswoman - zerlegt twitter zensorin - kongress zensur sozialen - zensur sozialen medien - medien congresswoman | 33 | 375_sozialen medien congresswoman_zerlegt twitter zensorin_kongress zensur sozialen_zensur sozialen medien | | 376 | institut entgeldsystem krankenhaus - entgeldsystem krankenhaus link - öffentlich zugänglicher abrechnungsdaten - medizinrecht autorin buches - medizinrecht autorin | 33 | 376_institut entgeldsystem krankenhaus_entgeldsystem krankenhaus link_öffentlich zugänglicher abrechnungsdaten_medizinrecht autorin buches | | 377 | mypillow discount code - discount code rvm - code rvm discount - lindell mypillow discount - ranger candy coffee | 33 | 377_mypillow discount code_discount code rvm_code rvm discount_lindell mypillow discount | | 378 | trump saudischen - saudi arabien mohammed - berührt saudische - schwert berührt saudische - saudi arabien | 33 | 378_trump saudischen_saudi arabien mohammed_berührt saudische_schwert berührt saudische | | 379 | erzählungen advent weihnachtszeit - advent weihnachtszeit erfreuen - weihnachtszeit tag erzählung - weihnachtszeit erfreuen heute - weihnachtszeit erfreuen | 33 | 379_erzählungen advent weihnachtszeit_advent weihnachtszeit erfreuen_weihnachtszeit tag erzählung_weihnachtszeit erfreuen heute | | 380 | spirituellen - leben spirituellen - spirituellen braucht fangt - gelassenheit leben spirituellen - spiritualität | 33 | 380_spirituellen_leben spirituellen_spirituellen braucht fangt_gelassenheit leben spirituellen | | 381 | reklame chip dr - chip nächste verschwörungstheorie - sache chip nächste - ungeniert reklame chip - reklame chip | 33 | 381_reklame chip dr_chip nächste 
verschwörungstheorie_sache chip nächste_ungeniert reklame chip | | 382 | gespann chef offizier - soldaten innen - chef offizier zivilen - lang warten ärzte - löste gespann chef | 33 | 382_gespann chef offizier_soldaten innen_chef offizier zivilen_lang warten ärzte | | 383 | weltfrauentag feministischer kampftag - weltfrauentag feministischer - weltfrauentag - feministischer kampftag - feministischer kampftag löwinnen | 33 | 383_weltfrauentag feministischer kampftag_weltfrauentag feministischer_weltfrauentag_feministischer kampftag | | 384 | optimistischer denke hoffe - schlimm prognosen regierung - prognosen regierung - immer markels optimismus - markels optimismus | 33 | 384_optimistischer denke hoffe_schlimm prognosen regierung_prognosen regierung_immer markels optimismus | | 385 | chips kanäle youtube - odysee gettr chips - gettr chips - odysee gettr youtube - youtube odysee gettr | 33 | 385_chips kanäle youtube_odysee gettr chips_gettr chips_odysee gettr youtube | | 386 | marktplatz hofheim 19 - löwenplatz dieburg 18 - marktplatz hofheim - marktplatz darmstadt 17 - 18 00 bürgerhaus | 33 | 386_marktplatz hofheim 19_löwenplatz dieburg 18_marktplatz hofheim_marktplatz darmstadt 17 | | 387 | sollen russischen truppen - kreml syrer rekrutiert - syrische kämpfer - syrische söldner - syrienkriegs handelt idlib | 33 | 387_sollen russischen truppen_kreml syrer rekrutiert_syrische kämpfer_syrische söldner | | 388 | russen berechtigt deutsches - beschlagnahmt superjacht russischem - deutschland beschlagnahmt superjacht - kauft russen enteignet - deutschland beschlagnahmt | 33 | 388_russen berechtigt deutsches_beschlagnahmt superjacht russischem_deutschland beschlagnahmt superjacht_kauft russen enteignet | | 389 | kündigung ungeimpfter pflegekräfte - bedrohlichen pflegenotstand aufgrund - viele pflegekräfte seit - bedrohlichen pflegenotstand - 000 pfleger fehlen | 33 | 389_kündigung ungeimpfter pflegekräfte_bedrohlichen pflegenotstand aufgrund_viele pflegekräfte seit_bedrohlichen pflegenotstand | | 390 | taylor punkt musiker - punkt musiker mehr - punkt musiker - musiker mehr - musiker mehr still | 33 | 390_taylor punkt musiker_punkt musiker mehr_punkt musiker_musiker mehr | | 391 | deeskalationsstrategie freuen danke - unermüdliche aufklärungsarbeit bedanken - danke unermüdlichen einsatz - dank wunderbares feedback - danke unermüdlichen | 33 | 391_deeskalationsstrategie freuen danke_unermüdliche aufklärungsarbeit bedanken_danke unermüdlichen einsatz_dank wunderbares feedback | | 392 | tun unterstützen plattform - zeit spenden melden - engagieren zeit spenden - tun unterstützen - spenden melden | 33 | 392_tun unterstützen plattform_zeit spenden melden_engagieren zeit spenden_tun unterstützen | | 393 | putins krieg serbische - serbiens beliebtester - belgrad serbiens beliebtester - serbiens beliebtester fußballmannschaft - serbien zehntausende fans | 33 | 393_putins krieg serbische_serbiens beliebtester_belgrad serbiens beliebtester_serbiens beliebtester fußballmannschaft | | 394 | nehammers immerwährenden neutralität - fortsetzung nehammer - chef nehammer - tagen bezeichnete nehammer - nehammer | 33 | 394_nehammers immerwährenden neutralität_fortsetzung nehammer_chef nehammer_tagen bezeichnete nehammer | | 395 | ersten wochen lockdowns - lockdown wurde - wochen lockdowns - lockdowns - lockdowns 2020 wenigsten | 33 | 395_ersten wochen lockdowns_lockdown wurde_wochen lockdowns_lockdowns | | 396 | funktioniert stromausfall blackout - stabo fc 850 - schwimmfähiges allwetter pmr - fc 850 handelt - beim 
fc 850 | 32 | 396_funktioniert stromausfall blackout_stabo fc 850_schwimmfähiges allwetter pmr_fc 850 handelt | | 397 | geht feiner humor - humor sonntagsbraten - humor feine waffe - wurden geschmälert humor - lustiges wenig humor | 32 | 397_geht feiner humor_humor sonntagsbraten_humor feine waffe_wurden geschmälert humor | | 398 | twitter facebook telegram - twitter facebook - facebook telegram arbeit - folgen sozial mediakanälen - sozial mediakanälen | 32 | 398_twitter facebook telegram_twitter facebook_facebook telegram arbeit_folgen sozial mediakanälen | | 399 | vertuscht vergessen 2022 - jahr 2021 brisante - vergessen 2022 2021 - 2021 brisante neuerscheinung - vergessen 2022 | 32 | 399_vertuscht vergessen 2022_jahr 2021 brisante_vergessen 2022 2021_2021 brisante neuerscheinung | | 400 | bereitzustellen russland - außenminister rjabkow russland - bereitzustellen russland kündigt - rjabkow russland sicherheitsdialog - ukraine bereitzustellen russland | 32 | 400_bereitzustellen russland_außenminister rjabkow russland_bereitzustellen russland kündigt_rjabkow russland sicherheitsdialog | | 401 | putin bestellen aktueller - putin bestellen - euro verschenken sonntag - shop bestellen heft - bestellen aktueller | 32 | 401_putin bestellen aktueller_putin bestellen_euro verschenken sonntag_shop bestellen heft | | 402 | demo gegendemonstration abbiegen - demo gegendemonstration - übersicht demos heute - demos veranstaltungen etc - gegendemonstration abbiegen | 32 | 402_demo gegendemonstration abbiegen_demo gegendemonstration_übersicht demos heute_demos veranstaltungen etc | | 403 | live stream deutschland - twitter gettr live - live stream - tages live stream - livestream | 32 | 403_live stream deutschland_twitter gettr live_live stream_tages live stream | | 404 | wasserfilter hält extrem - alleskönner wasserfilter hält - absoluter alleskönner wasserfilter - alleskönner wasserfilter - wasserfilter hält | 32 | 404_wasserfilter hält extrem_alleskönner wasserfilter hält_absoluter alleskönner wasserfilter_alleskönner wasserfilter | | 405 | überzogene vorgehen bereitschaftspolizei - bereitschaftspolizei demo sorgt - vorgehen bereitschaftspolizei demo - vorgehen bereitschaftspolizei - bereitschaftspolizei demo | 32 | 405_überzogene vorgehen bereitschaftspolizei_bereitschaftspolizei demo sorgt_vorgehen bereitschaftspolizei demo_vorgehen bereitschaftspolizei | | 406 | müssen fenix stirnlampe - perfekte lampe - perfekte lampe beide - lampe beide - fenix stirnlampe perfekte | 32 | 406_müssen fenix stirnlampe_perfekte lampe_perfekte lampe beide_lampe beide | | 407 | deutschland schweigtursprünglich beitrag - impfpass deutschland schweigtursprünglich - deutschland schweigtursprünglich - wahlen verbindung deutschland - verbindung deutschland | 32 | 407_deutschland schweigtursprünglich beitrag_impfpass deutschland schweigtursprünglich_deutschland schweigtursprünglich_wahlen verbindung deutschland | | 408 | gesamtzahl todesfälle senioren - todesfälle senioren 65 - 71 übersterblichkeit senioren - todesfälle senioren - übersterblichkeit senioren jahr | 32 | 408_gesamtzahl todesfälle senioren_todesfälle senioren 65_71 übersterblichkeit senioren_todesfälle senioren | | 409 | berlin zumindest - berlin zumindest umkreis - 22 berlin zumindest - reichstags berlin - denkt dran berlin | 32 | 409_berlin zumindest_berlin zumindest umkreis_22 berlin zumindest_reichstags berlin | | 410 | jobplattform jobsuche füreinefreieimpfentscheidung - jobsuche füreinefreieimpfentscheidung - telegram seite jobangebote - seite 
jobangebote - jobplattform | 32 | 410_jobplattform jobsuche füreinefreieimpfentscheidung_jobsuche füreinefreieimpfentscheidung_telegram seite jobangebote_seite jobangebote | | 411 | per banküberweisung schweiz - banküberweisung österreich iban - banküberweisung schweiz - per banküberweisung österreich - credit suisse zürich | 32 | 411_per banküberweisung schweiz_banküberweisung österreich iban_banküberweisung schweiz_per banküberweisung österreich | | 412 | bakterienkulturen gebildet beste - lebensmittel fermentieren haltbar - wurden lebensmittel fermentieren - fermentieren haltbar gemacht - bakterienkulturen gebildet | 32 | 412_bakterienkulturen gebildet beste_lebensmittel fermentieren haltbar_wurden lebensmittel fermentieren_fermentieren haltbar gemacht | | 413 | fliegen verboten - müssen ryanair flügen - ende maskenpflicht flugzeugen - maskenpflicht flugzeugen - vorstandsvorsitzende billigfluggesellschaft ryanair | 32 | 413_fliegen verboten_müssen ryanair flügen_ende maskenpflicht flugzeugen_maskenpflicht flugzeugen | | 414 | schützende zuflucht geschaffen - effiziente schützende zuflucht - bietet notfall wärmespeicherung - schützende zuflucht - wärmespeicherung wirkt isolierend | 32 | 414_schützende zuflucht geschaffen_effiziente schützende zuflucht_bietet notfall wärmespeicherung_schützende zuflucht | | 415 | transportiert motor schallisoliertes - verstaut transportiert motor - häppchen weltregierung motor - motor schallisoliertes gehäuse - transportiert motor | 32 | 415_transportiert motor schallisoliertes_verstaut transportiert motor_häppchen weltregierung motor_motor schallisoliertes gehäuse | | 416 | staatshaftung sagte vorsitzende - staatshaftung sagte - lockdowns urteil bundesgerichtshofs - seien aufgabe staatshaftung - aufgabe staatshaftung sagte | 31 | 416_staatshaftung sagte vorsitzende_staatshaftung sagte_lockdowns urteil bundesgerichtshofs_seien aufgabe staatshaftung | | 417 | welt unabhängig kritisch - unabhängig kritisch unterstützen - welt unabhängig - kritisch unterstützen iban - partioten welt unabhängig | 31 | 417_welt unabhängig kritisch_unabhängig kritisch unterstützen_welt unabhängig_kritisch unterstützen iban | | 418 | 2023 attacken negativbewertungen - german aktuelle presseschau - 2021 attacken negativbewertungen - 2023 attacken - 02 2023 attacken | 31 | 418_2023 attacken negativbewertungen_german aktuelle presseschau_2021 attacken negativbewertungen_2023 attacken | | 419 | neun bundesländern politik - bundesländern politik möchte - mitzubringen politik trotzdem - ratschen mitzubringen politik - mitzubringen politik | 31 | 419_neun bundesländern politik_bundesländern politik möchte_mitzubringen politik trotzdem_ratschen mitzubringen politik | | 420 | impfen schwangerschaft - schwangeren frauen - thema impfen schwangerschaft - impfstoffe missbraucht - neuartiger impfstoffe missbraucht | 31 | 420_impfen schwangerschaft_schwangeren frauen_thema impfen schwangerschaft_impfstoffe missbraucht | | 421 | stinkstoffen erreicht abwehrspray - herkömmlichen pfeffersprays kombination - herkömmlichen pfeffersprays - neue abwehrspray bietet - abwehrspray bietet bessere | 31 | 421_stinkstoffen erreicht abwehrspray_herkömmlichen pfeffersprays kombination_herkömmlichen pfeffersprays_neue abwehrspray bietet | | 422 | michael kellner dr - klüssendorf dr - lehmann sylvia lehmann - klüssendorf dr bärbel - lauterbach sven lehmann | 31 | 422_michael kellner dr_klüssendorf dr_lehmann sylvia lehmann_klüssendorf dr bärbel | | 423 | gruppe klimaaktivisten - kleine gruppe klimaaktivisten - 
polizeieinsatz klimaaktivisten pinseln - polizeieinsatz klimaaktivisten - klimaaktivisten | 31 | 423_gruppe klimaaktivisten_kleine gruppe klimaaktivisten_polizeieinsatz klimaaktivisten pinseln_polizeieinsatz klimaaktivisten | | 424 | widerstand radikalisiert mehr - widerstand randgruppe öffentlich - radikalisiert mehr widerstand - volk jubelt widerstand - widerstand radikalisiert | 31 | 424_widerstand radikalisiert mehr_widerstand randgruppe öffentlich_radikalisiert mehr widerstand_volk jubelt widerstand | | 425 | millionen euro mehr - steigen seien mehrkosten - millionen euro pro - gehälter sollen - seien mehrkosten | 31 | 425_millionen euro mehr_steigen seien mehrkosten_millionen euro pro_gehälter sollen | | 426 | verfassungswidrig ausschließlich versammlungen - staatsrechtler kritisieren regeln - demonstranten wurden gehorsam - verfassungswidrig ausschließlich - freiheit ruft protestiert | 31 | 426_verfassungswidrig ausschließlich versammlungen_staatsrechtler kritisieren regeln_demonstranten wurden gehorsam_verfassungswidrig ausschließlich | | 427 | weltmacht russland krieg - russland vorsitz alliierten - deutschland direkt ukraine - russland krieg erklärt - direkt ukraine konflikt | 31 | 427_weltmacht russland krieg_russland vorsitz alliierten_deutschland direkt ukraine_russland krieg erklärt | | 428 | impfungen zusammenhängt baby - möglicherweise impfungen - vaccine adverse event - baby plötzlich stirbt - möglicherweise impfungen zusammenhängt | 31 | 428_impfungen zusammenhängt baby_möglicherweise impfungen_vaccine adverse event_baby plötzlich stirbt | | 429 | erlassen tierschutz hierzulande - schäferhunden per angst - erlassen tierschutz - skepsis beim gesundheitspersonal - artikel steht skepsis | 31 | 429_erlassen tierschutz hierzulande_schäferhunden per angst_erlassen tierschutz_skepsis beim gesundheitspersonal | | 430 | kindgerechten kriegs aufklärungsvideos - aufklärungsvideos schule ansehen - angeblich kindgerechten kriegs - kindgerechten kriegs - kriegs aufklärungsvideos schule | 31 | 430_kindgerechten kriegs aufklärungsvideos_aufklärungsvideos schule ansehen_angeblich kindgerechten kriegs_kindgerechten kriegs | | 431 | krieg ukraine chefsache - ukraine chefsache einziger - titel krieg ukraine - ukraine gespräch illner - ukraine chefsache | 31 | 431_krieg ukraine chefsache_ukraine chefsache einziger_titel krieg ukraine_ukraine gespräch illner | | 432 | global agierende kriegstreiber - grausamer krieg medien - erschütternden krieg - erschütternden krieg ende - krieg jemen | 31 | 432_global agierende kriegstreiber_grausamer krieg medien_erschütternden krieg_erschütternden krieg ende | | 433 | starlink satelliteninternetdienst - internet lieferung starlink - starlink internet - starlink internet terminals - system starlink internet | 31 | 433_starlink satelliteninternetdienst_internet lieferung starlink_starlink internet_starlink internet terminals | | 434 | falschbehauptungen unterstellen blamierten - versuch falschbehauptungen unterstellen - falschbehauptungen unterstellen - beim versuch falschbehauptungen - versuch falschbehauptungen | 31 | 434_falschbehauptungen unterstellen blamierten_versuch falschbehauptungen unterstellen_falschbehauptungen unterstellen_beim versuch falschbehauptungen | | 435 | wegen sahara - theorien kürzlichen saharastaub - saharastaub europa wirklich - kürzlichen saharastaub europa - sand schwefeldioxid | 31 | 435_wegen sahara_theorien kürzlichen saharastaub_saharastaub europa wirklich_kürzlichen saharastaub europa | | 436 | destress today stew - prepare 
family famine - support stew - family famine shortages - support stew peters | 31 | 436_destress today stew_prepare family famine_support stew_family famine shortages | | 437 | nudel pizzateig apfelmus - nudel pizzateig - pizzateig apfelmus - gaumenfreuden gemachte marmeladen - pizzateig | 31 | 437_nudel pizzateig apfelmus_nudel pizzateig_pizzateig apfelmus_gaumenfreuden gemachte marmeladen | | 438 | ohio katastrophe tschernobyl - ohio katastrophe - entgleisung ohio örtliche - katastrophe tschernobyl - entgleisung ohio | 31 | 438_ohio katastrophe tschernobyl_ohio katastrophe_entgleisung ohio örtliche_katastrophe tschernobyl | | 439 | nahrungsmitteln notfall geschützte - getreidetonne notvorräte sicher - aufbewahrung trockenen lebensmitteln - getreidetonne vielseitige sichere - notvorräte sicher lagern | 31 | 439_nahrungsmitteln notfall geschützte_getreidetonne notvorräte sicher_aufbewahrung trockenen lebensmitteln_getreidetonne vielseitige sichere | | 440 | gesperrt telegramzensur - telegram gesperrt telegramzensur - unzensiert telegram - vorsicht telegram greift - vorsicht telegram | 31 | 440_gesperrt telegramzensur_telegram gesperrt telegramzensur_unzensiert telegram_vorsicht telegram greift | | 441 | wichtiger je sicherheitsstiefel - optimalen halt knöchelbereich - ausgestattet sorgt - getragen besonders abriebfestes - bietet stiefel optimalen | 31 | 441_wichtiger je sicherheitsstiefel_optimalen halt knöchelbereich_ausgestattet sorgt_getragen besonders abriebfestes | | 442 | seite organspende gesetz - organspende gesetz - organspende gesetz 15 - sowie demokratie europa - schweiz referendum | 31 | 442_seite organspende gesetz_organspende gesetz_organspende gesetz 15_sowie demokratie europa | | 443 | liegen faschismus - faschismus gesprochen dabei - faschismus gesprochen - faschismus - liegen faschismus beginn | 30 | 443_liegen faschismus_faschismus gesprochen dabei_faschismus gesprochen_faschismus | | 444 | polizei sollten garant - hierarchisches system polizist - polizei sollten - gerade polizei sollten - system polizist identität | 30 | 444_polizei sollten garant_hierarchisches system polizist_polizei sollten_gerade polizei sollten | | 445 | bevorstehender impfstoff omicron - impfstoff omicron variante - impfstoff omicron - impfstoffe biontech moderna - bevorstehender impfstoff | 30 | 445_bevorstehender impfstoff omicron_impfstoff omicron variante_impfstoff omicron_impfstoffe biontech moderna | | 446 | mainstream medien aufgezeigt - medien aufgezeigt pflichtlektüre - medien aufgezeigt - großen mainstream medien - medien lügen corona | 30 | 446_mainstream medien aufgezeigt_medien aufgezeigt pflichtlektüre_medien aufgezeigt_großen mainstream medien | | 447 | rücktritt verbleibenden minister - verbleibenden minister kanzler - minister kanzler zwei - schmidt vernichtet karrieren - minister kanzler | 30 | 447_rücktritt verbleibenden minister_verbleibenden minister kanzler_minister kanzler zwei_schmidt vernichtet karrieren | | 448 | staatsfunkt tatsächlich weihnachtsamnestie - ungeimpfte spricht staatsfunkt - spricht staatsfunkt tatsächlich - staatsfunkt tatsächlich - bürger einfach ungeimpft | 30 | 448_staatsfunkt tatsächlich weihnachtsamnestie_ungeimpfte spricht staatsfunkt_spricht staatsfunkt tatsächlich_staatsfunkt tatsächlich | | 449 | atomkraftwerk südukrainischen - atomkraftwerk südukrainischen großstadt - größtem atomkraftwerk südukrainischen - atomkraftwerk russische - europas größtes atomkraftwerk | 30 | 449_atomkraftwerk südukrainischen_atomkraftwerk südukrainischen 
großstadt_größtem atomkraftwerk südukrainischen_atomkraftwerk russische | | 450 | unerträglich mittlerweile land - gleichgeschaltet innenminister hetzt - innenminister hetzt - medien gleichgeschaltet innenminister - gleichgeschaltet innenminister | 30 | 450_unerträglich mittlerweile land_gleichgeschaltet innenminister hetzt_innenminister hetzt_medien gleichgeschaltet innenminister | | 451 | gesundheit vitamin abwehrkräfte - vitamin d3 schützt - mal gesundheit vitamin - gesundheit vitamin - wirkt diabetes vitamin | 30 | 451_gesundheit vitamin abwehrkräfte_vitamin d3 schützt_mal gesundheit vitamin_gesundheit vitamin | | 452 | demos blick deutschland - audio hans joachim - rutter österreich demos - blick deutschland - joachim müller telegram | 30 | 452_demos blick deutschland_audio hans joachim_rutter österreich demos_blick deutschland | | 453 | liebe lichtgrüsse oberallgäu - lichtgrüße erzgebirge - oberallgäu lichtgrüße erzgebirge - licht raum füllt - oberallgäu lichtgrüße | 30 | 453_liebe lichtgrüsse oberallgäu_lichtgrüße erzgebirge_oberallgäu lichtgrüße erzgebirge_licht raum füllt | | 454 | zahlreicher presseanfragen haimbuchner - kritischer medien bspw - chefredakteur politisch korrekten - geghostet anfragen kritischer - journalisten kollegen beitrag | 30 | 454_zahlreicher presseanfragen haimbuchner_kritischer medien bspw_chefredakteur politisch korrekten_geghostet anfragen kritischer | | 455 | erschienen amazon lügt - bond gehört amazonum - amazon vertreiben deswegen - recht amazon wichtigster - erschienen amazon | 30 | 455_erschienen amazon lügt_bond gehört amazonum_amazon vertreiben deswegen_recht amazon wichtigster | | 456 | viele soldaten seien - soldaten seien krank - menschsein ab soldaten - soldaten seien - ort viele soldaten | 30 | 456_viele soldaten seien_soldaten seien krank_menschsein ab soldaten_soldaten seien | | 457 | polnische medien offensichtlich - starteten polnische medien - polnische medien - schreiben warschau verschweigt - warschau verschweigt | 30 | 457_polnische medien offensichtlich_starteten polnische medien_polnische medien_schreiben warschau verschweigt | | 458 | journalistin walter - walter hämmerle chefredakteur - chefredakteur wiener zeitung - journalistin walter hämmerle - bestsellerautorin moderatorin | 30 | 458_journalistin walter_walter hämmerle chefredakteur_chefredakteur wiener zeitung_journalistin walter hämmerle | | 459 | zuviele mitarbeiter geimpft - mitarbeiter gesundheitswesen bundesländern - betrieben zuviele mitarbeiter - pflegekräfte mitarbeiter gesundheitswesen - ausfallende mitarbeitern betrieb | 29 | 459_zuviele mitarbeiter geimpft_mitarbeiter gesundheitswesen bundesländern_betrieben zuviele mitarbeiter_pflegekräfte mitarbeiter gesundheitswesen | | 460 | tiktok instagram youtubedenkt - youtubedenkt dran denkt - instagram youtubedenkt - vorrübergehend gesperrt instagram - youtubedenkt | 29 | 460_tiktok instagram youtubedenkt_youtubedenkt dran denkt_instagram youtubedenkt_vorrübergehend gesperrt instagram | | 461 | bundespressekonferenz situation impfstoffen - gewünschten impfpflicht - impfstoffmangel - karl lauterbach befürchtet - tat wenig impfstoff | 29 | 461_bundespressekonferenz situation impfstoffen_gewünschten impfpflicht_impfstoffmangel_karl lauterbach befürchtet | | 462 | bioverfügbarkeit liposomale produkte - liposomale produkte - bioverfügbarkeit liposomale - liposomale produkte evolution - zellen bioverfügbarkeit liposomale | 29 | 462_bioverfügbarkeit liposomale produkte_liposomale produkte_bioverfügbarkeit liposomale_liposomale 
produkte evolution | | 463 | danke q74you angst - q74you angst schnelleinstieg - q74you kommst angst - st angst schnelleinstieg - danke q74you schnelleinstieg | 29 | 463_danke q74you angst_q74you angst schnelleinstieg_q74you kommst angst_st angst schnelleinstieg | | 464 | spaß ersatzdocht petroleumheizung - heizung löschautomatikunsere petroleum - löschautomatikunsere petroleum heizung - petroleum heizung löschautomatikunsere - praktischen petroleumbetriebenen heizung | 29 | 464_spaß ersatzdocht petroleumheizung_heizung löschautomatikunsere petroleum_löschautomatikunsere petroleum heizung_petroleum heizung löschautomatikunsere | | 465 | masken kinder atemwiderstand - masken sei kinder - masken kinder - masken erwachsene zugelassen - hoch masken erwachsene | 29 | 465_masken kinder atemwiderstand_masken sei kinder_masken kinder_masken erwachsene zugelassen | | 466 | thema russlands krieg - русофобия на украине - erniedrigung russischstämmigen leute - erniedrigung russischstämmigen - gibt thema russlands | 29 | 466_thema russlands krieg_русофобия на украине_erniedrigung russischstämmigen leute_erniedrigung russischstämmigen | | 467 | russischen überfall ukraine - russischen überfall - moskau kriegshandlungen - geben russland - mai erreicht russische | 29 | 467_russischen überfall ukraine_russischen überfall_moskau kriegshandlungen_geben russland | | 468 | impfstatus wurden ungeimpften - impfquoten direkt verglichen - impfstatus unbekannt 22 - impfstoffe zeitraum 27 - prozent fälle impfstatus | 29 | 468_impfstatus wurden ungeimpften_impfquoten direkt verglichen_impfstatus unbekannt 22_impfstoffe zeitraum 27 | | 469 | marburg virus eigentlich - marburg virus handelt - virus eigentlich marburg - sei gesundheitsalarm provinz - gesundheitsalarm provinz kié | 29 | 469_marburg virus eigentlich_marburg virus handelt_virus eigentlich marburg_sei gesundheitsalarm provinz | | 470 | root wellness c60evo - wellness c60evo - elliott life enhancing - wellness c60evo use - life enhancing natural | 29 | 470_root wellness c60evo_wellness c60evo_elliott life enhancing_wellness c60evo use | | 471 | zeitpunkt gesetzliche - verwaltungsstrafen 31 2022 - 2023 zeitpunkt gesetzliche - stgb freiheitsstrafe monaten - zeitpunkt gesetzliche grundlage | 29 | 471_zeitpunkt gesetzliche_verwaltungsstrafen 31 2022_2023 zeitpunkt gesetzliche_stgb freiheitsstrafe monaten | | 472 | twitter folge rabbit - rabbit research telegram - twitter pflegeheimen krankenhäusern - twitter pflegeheimen - folge rabbit research | 29 | 472_twitter folge rabbit_rabbit research telegram_twitter pflegeheimen krankenhäusern_twitter pflegeheimen | | 473 | viertel gasspeicherkapazitäten europäischen - füllstand gasspeicher deutschland - gasspeicher deutschland - etwa viertel gasspeicherkapazitäten - deutschen gasspeicher | 29 | 473_viertel gasspeicherkapazitäten europäischen_füllstand gasspeicher deutschland_gasspeicher deutschland_etwa viertel gasspeicherkapazitäten | | 474 | hergestellt nattokinase abgebaut - abgebaut nattokinase - abgebaut nattokinase heilnatura - nattokinase enzym japanischen - nattokinase abgebaut | 29 | 474_hergestellt nattokinase abgebaut_abgebaut nattokinase_abgebaut nattokinase heilnatura_nattokinase enzym japanischen | | 475 | emoji profilnamen twitter - beitrag platziert emoji - gerne emoji beitrag - platziert emoji profilnamen - emoji beitrag platziert | 29 | 475_emoji profilnamen twitter_beitrag platziert emoji_gerne emoji beitrag_platziert emoji profilnamen | | 476 | kurkuma bekanntes - kurkuma bekanntes gewürz - 
entzündungshemmend kurkuma - entgiftend entzündungshemmend kurkuma - kurkuma wirkt erfolgreich | 29 | 476_kurkuma bekanntes_kurkuma bekanntes gewürz_entzündungshemmend kurkuma_entgiftend entzündungshemmend kurkuma | | 477 | 02 schulleiterin commentary - schulleiterin commentary playlist - danke verwendungszweck commentary - eben veröffentlichte zweite - verwendungszweck commentary | 29 | 477_02 schulleiterin commentary_schulleiterin commentary playlist_danke verwendungszweck commentary_eben veröffentlichte zweite | | 478 | deutschland unabhängig kritisch - ende deutschland unabhängig - deutschland unabhängig - deutschland de sendungen - ende deutschland de | 29 | 478_deutschland unabhängig kritisch_ende deutschland unabhängig_deutschland unabhängig_deutschland de sendungen | | 479 | russland österreichische neutralität - gespräch österreichischen irrsinn - fragt warum österreicher - beitritt österreichs gesprochen - gespräch österreichischen | 29 | 479_russland österreichische neutralität_gespräch österreichischen irrsinn_fragt warum österreicher_beitritt österreichs gesprochen | | 480 | verspottet katastrophenschutz nrw - verspottet katastrophenschutz - vorsorge katastrophenfälle nrw - bürger aufgefordert katastrophenfälle - aufgefordert katastrophenfälle | 29 | 480_verspottet katastrophenschutz nrw_verspottet katastrophenschutz_vorsorge katastrophenfälle nrw_bürger aufgefordert katastrophenfälle | | 481 | fixieren akku wanderns - akku langer lebensdauer - akku wanderns geladen - taschenlampe 15 stunden - fixieren akku | 29 | 481_fixieren akku wanderns_akku langer lebensdauer_akku wanderns geladen_taschenlampe 15 stunden | | 482 | europäischen parlaments unternehmen - eu parlament straßburg - parlament straßburg ende - parlament straßburg endgültige - parlament straßburg | 29 | 482_europäischen parlaments unternehmen_eu parlament straßburg_parlament straßburg ende_parlament straßburg endgültige | | 483 | podcast besuche telegram - punkte podcast besuche - globalen wandels schau - aktuelle presseschau interessierten - 2023 aktuelle presseschau | 29 | 483_podcast besuche telegram_punkte podcast besuche_globalen wandels schau_aktuelle presseschau interessierten | | 484 | finanzminister samt grünen - schwarz grüne regierung - bundesregierung sträflich vernachlässigt - vorsorgepflichten bundesregierung sträflich - paletti abgehobene regierung | 29 | 484_finanzminister samt grünen_schwarz grüne regierung_bundesregierung sträflich vernachlässigt_vorsorgepflichten bundesregierung sträflich | | 485 | italien stille krieg - italien übrigens regionalwahlen - italien wunsch bürger - italien leben - regionalwahlen italien | 29 | 485_italien stille krieg_italien übrigens regionalwahlen_italien wunsch bürger_italien leben | | 486 | wiedereinführung wehrpflicht deutschland - wehrpflicht deutschland sei - wehrpflicht deutschland - deutschen streitkräfte - bundeswehr zurückkommen deutschland | 29 | 486_wiedereinführung wehrpflicht deutschland_wehrpflicht deutschland sei_wehrpflicht deutschland_deutschen streitkräfte | | 487 | feldwebel erzählt arrest - erzählt arrest wegen - erzählt arrest - vorgeworfen rahmen - erst entlassen eingesperrt | 29 | 487_feldwebel erzählt arrest_erzählt arrest wegen_erzählt arrest_vorgeworfen rahmen | | 488 | abtreibungsrichtlinien veröffentlicht länder - abtreibungsrichtlinien veröffentlicht - neue abtreibungsrichtlinien veröffentlicht - minnesota erlaubt abtreibungen - erlaubt abtreibungen grund | 29 | 488_abtreibungsrichtlinien veröffentlicht länder_abtreibungsrichtlinien 
veröffentlicht_neue abtreibungsrichtlinien veröffentlicht_minnesota erlaubt abtreibungen | | 489 | größtmögliche mobilität stabile - versandkostenfrei größtmögliche mobilität - größtmögliche mobilität - vdb größtmögliche mobilität - mobilität stabile lenkrollen | 29 | 489_größtmögliche mobilität stabile_versandkostenfrei größtmögliche mobilität_größtmögliche mobilität_vdb größtmögliche mobilität | | 490 | gesundheit österreich ärzte - initiative gesundheit österreich - einverständniserklärung benötigen bitte - österreich ärzte - bitten einverständnis per | 29 | 490_gesundheit österreich ärzte_initiative gesundheit österreich_einverständniserklärung benötigen bitte_österreich ärzte | | 491 | zündsicherung keramik gasheizer - robuster keramik gasheizofen - keramik gasheizer - katalyt keramikbrenner angenehme - keramik gasheizofen hoher | 29 | 491_zündsicherung keramik gasheizer_robuster keramik gasheizofen_keramik gasheizer_katalyt keramikbrenner angenehme | | 492 | steht frieden - teilnehmern frieden freiheit - frieden freiheit - teilnehmern frieden - siegen steht frieden | 28 | 492_steht frieden_teilnehmern frieden freiheit_frieden freiheit_teilnehmern frieden | | 493 | qualität neben camping - blackout geeignet besonders - hervorragend vorsorge blackout - camping jeglichen outdoor - vorsorge blackout geeignet | 28 | 493_qualität neben camping_blackout geeignet besonders_hervorragend vorsorge blackout_camping jeglichen outdoor | | 494 | einkaufsstraßen lässt rausausderblase - mainstream brechen berichterstattung - berichterstattung beeinflussen umso - brechen berichterstattung beeinflussen - berichterstattung beeinflussen | 28 | 494_einkaufsstraßen lässt rausausderblase_mainstream brechen berichterstattung_berichterstattung beeinflussen umso_brechen berichterstattung beeinflussen | | 495 | wiener psychiaters männlicher - männlicher narzissmus drama - psychiaters männlicher - psychiaters männlicher narzissmus - männlicher narzissmus | 28 | 495_wiener psychiaters männlicher_männlicher narzissmus drama_psychiaters männlicher_psychiaters männlicher narzissmus | | 496 | rom amsterdam - amsterdam - irgendeine austauschbare metropole - milano piazza - mittendrin ordensburg | 28 | 496_rom amsterdam_amsterdam_irgendeine austauschbare metropole_milano piazza | | 497 | tagesschau warnt blackouts - ausfallen blackout wäre - ausfallenein blackout wäre - blackout wäre worst - stromausfall blackout droht | 28 | 497_tagesschau warnt blackouts_ausfallen blackout wäre_ausfallenein blackout wäre_blackout wäre worst | | 498 | ungeimpften geimpften impfpflicht - wien millionen ungeimpften - millionen ungeimpften geimpften - sollen landen bundeskanzleramt - millionen ungeimpften | 28 | 498_ungeimpften geimpften impfpflicht_wien millionen ungeimpften_millionen ungeimpften geimpften_sollen landen bundeskanzleramt | | 499 | supermärkte nehmen russland - nehmen russland hergestellte - netto boykott russischer - russischer verkündet ziehen - boykott russischer | 28 | 499_supermärkte nehmen russland_nehmen russland hergestellte_netto boykott russischer_russischer verkündet ziehen | | 500 | bekanntlich dürfen deutschland - medien begann guerot - euroraum tendenz eigenen - guerot einschätzung bankenkrise - finanzaktivitäten rechtsextremisten verstärkt | 28 | 500_bekanntlich dürfen deutschland_medien begann guerot_euroraum tendenz eigenen_guerot einschätzung bankenkrise | | 501 | entweder arzt vertritt - politik ärztekammer komplementärmedizin - darstellt entweder arzt - arzt vertritt regierungslinie - arzt vertritt | 
28 | 501_entweder arzt vertritt_politik ärztekammer komplementärmedizin_darstellt entweder arzt_arzt vertritt regierungslinie | | 502 | ukrainischen luftstreitkräfte - ukrainische militäreinrichtungen getroffen - hubschrauber 160 unbemannte - militärische spezialfahrzeuge zerstört - ukrainische militäreinrichtungen | 28 | 502_ukrainischen luftstreitkräfte_ukrainische militäreinrichtungen getroffen_hubschrauber 160 unbemannte_militärische spezialfahrzeuge zerstört | | 503 | cacoa super food - premium cbd cacoa - weight healthy - cbd cacoa super - cacoa | 28 | 503_cacoa super food_premium cbd cacoa_weight healthy_cbd cacoa super | | 504 | verschuldeten chinesischen immobilienriesen - verschuldeten chinesischen - chinas zusammenbruch weltmarkt - chinesischen immobilienriesen evergrande - vorbild china ernst | 28 | 504_verschuldeten chinesischen immobilienriesen_verschuldeten chinesischen_chinas zusammenbruch weltmarkt_chinesischen immobilienriesen evergrande | | 505 | genommen kriegseintritt irans - kriegseintritt irans - kriegseintritt irans sprechen - drohung iran - iran abgefeuert | 28 | 505_genommen kriegseintritt irans_kriegseintritt irans_kriegseintritt irans sprechen_drohung iran | | 506 | zurück bio vollmilchpulver - vollmilchpulver dose grundnahrungsmittel - trocknungsprozess verliert milch - bio vollmilchpulver dose - milch gehört grundnahrungsmitteln | 28 | 506_zurück bio vollmilchpulver_vollmilchpulver dose grundnahrungsmittel_trocknungsprozess verliert milch_bio vollmilchpulver dose | | 507 | proteste österreich blockieren - corona proteste österreich - proteste österreich - österreich blockieren attackieren - antifaschisten erfolglos mehrheitlich | 28 | 507_proteste österreich blockieren_corona proteste österreich_proteste österreich_österreich blockieren attackieren | | 508 | marmeladen fruchtkompotten fleischgerichten - marmeladen fruchtkompotten - kürbis chili knoblauch - pilzen suppen fertiggerichten - spitzkohl kürbis chili | 28 | 508_marmeladen fruchtkompotten fleischgerichten_marmeladen fruchtkompotten_kürbis chili knoblauch_pilzen suppen fertiggerichten | | 509 | blackouts erforderlich pc - blackout vorsorge zusammengestellt - blackouts erforderlich - produkte blackout vorsorge - blackout vorsorge | 28 | 509_blackouts erforderlich pc_blackout vorsorge zusammengestellt_blackouts erforderlich_produkte blackout vorsorge | | 510 | regel bremst weihnachtsgeschäft - weihnachtsgeschäft bereits abgeschrieben - weihnachtsmärkte - viele hätten weihnachtsgeschäft - shopping berlin wirklich | 28 | 510_regel bremst weihnachtsgeschäft_weihnachtsgeschäft bereits abgeschrieben_weihnachtsmärkte_viele hätten weihnachtsgeschäft | | 511 | überalldeutschlandweit ch freiepressesauerland - überalldeutschlandweit ch - überalldeutschlandweit - spaziergängern überalldeutschlandweit ch - sächsischen spaziergängern überalldeutschlandweit | 28 | 511_überalldeutschlandweit ch freiepressesauerland_überalldeutschlandweit ch_überalldeutschlandweit_spaziergängern überalldeutschlandweit ch | | 512 | ne südafrikas - südafrika - ne südafrikas vier - südafrikas - ende gesund südafrika | 28 | 512_ne südafrikas_südafrika_ne südafrikas vier_südafrikas | | 513 | getreidetonnen trockene lebensmittel - trockene lebensmittel sicher - trockener lebensmittel weizen - mengen trockener lebensmittel - lebensmittel sicher aufbewahrt | 28 | 513_getreidetonnen trockene lebensmittel_trockene lebensmittel sicher_trockener lebensmittel weizen_mengen trockener lebensmittel | | 514 | universalradio vielseitigkeit schnell - 
universalradio vielseitigkeit - weltempfänger kompaktes universalradio - radio taschenlampe einsetzbar - kompaktes universalradio vielseitigkeit | 28 | 514_universalradio vielseitigkeit schnell_universalradio vielseitigkeit_weltempfänger kompaktes universalradio_radio taschenlampe einsetzbar | | 515 | notieren inhalt jeweiligen - eingelagerte getreide erntejahr - oberfläche handelsüblichen stift - inhalt jeweiligen vorratstonne - außen notieren | 28 | 515_notieren inhalt jeweiligen_eingelagerte getreide erntejahr_oberfläche handelsüblichen stift_inhalt jeweiligen vorratstonne | | 516 | europäisches vermögensregister leben - machbarkeitsstudie europäisches vermögensregister - europäisches vermögensregister hinblick - mehr leisten europäische - leisten europäische union | 28 | 516_europäisches vermögensregister leben_machbarkeitsstudie europäisches vermögensregister_europäisches vermögensregister hinblick_mehr leisten europäische | | 517 | live webinar endlich - live webinar - live webinar eingeladen - kostenlosen live webinar - live webinar findet | 28 | 517_live webinar endlich_live webinar_live webinar eingeladen_kostenlosen live webinar | | 518 | altersstruktur deutschen bevölkerung - deutschen bevölkerung zunehmend - angaben deutschen rentenversicherung - renteneintrittsalter - deutsche rentensystem | 28 | 518_altersstruktur deutschen bevölkerung_deutschen bevölkerung zunehmend_angaben deutschen rentenversicherung_renteneintrittsalter | | 519 | supermärkte drogerien zugangsbeschränkung - ungeimpften mv betrieben - unwissenschaftliche benachteiligung - nonfood handel 2g - umsatz händler verärgert | 28 | 519_supermärkte drogerien zugangsbeschränkung_ungeimpften mv betrieben_unwissenschaftliche benachteiligung_nonfood handel 2g | | 520 | antworten neues bundespressekonferenz - brief kärntner landessprechers - journalisten boris reitschuster - landessprechers mag - statt antworten neue | 28 | 520_antworten neues bundespressekonferenz_brief kärntner landessprechers_journalisten boris reitschuster_landessprechers mag | | 521 | nachhaltig krankheitserreger wasser - krankheitserreger wasser bakterien - wasser bakterien beispiel - krankheitserreger wasser - wasser bakterien | 28 | 521_nachhaltig krankheitserreger wasser_krankheitserreger wasser bakterien_wasser bakterien beispiel_krankheitserreger wasser | | 522 | ärztekammer verfolgt mediziner - impfpflicht gekündigt - ärztekammer verfolgt - zuvor warnte ärztekammer - brief ärztekammerpräsidenten | 27 | 522_ärztekammer verfolgt mediziner_impfpflicht gekündigt_ärztekammer verfolgt_zuvor warnte ärztekammer | | 523 | waffenlieferungsgelder ukraine - eu waffenlieferungsgelder ukraine - militärhilfen ukraine verständigt - militärhilfen ukraine - militärhilfe ukraine 500 | 27 | 523_waffenlieferungsgelder ukraine_eu waffenlieferungsgelder ukraine_militärhilfen ukraine verständigt_militärhilfen ukraine | | 524 | michael brunner mfg - zimmermann wissenschaftsforscher peter - manfred scheingast - mfg manfred scheingast - zimmermann wissenschaftsforscher | 27 | 524_michael brunner mfg_zimmermann wissenschaftsforscher peter_manfred scheingast_mfg manfred scheingast | | 525 | outoftheboxmediatv mittas iban - iban at29 - verbreitet mittas iban - bic swift bawaatwwxxx - iban | 27 | 525_outoftheboxmediatv mittas iban_iban at29_verbreitet mittas iban_bic swift bawaatwwxxx | | 526 | pakistanindien pakistan atommächte - rakete versehentlich pakistan - rakete pakistanindien pakistan - rakete pakistanindien - rakete benachbarte pakistan | 27 | 526_pakistanindien 
pakistan atommächte_rakete versehentlich pakistan_rakete pakistanindien pakistan_rakete pakistanindien | | 527 | fersenbereich stahlkappe eva - dämpfung grip untergrund - verstärkter fersenbereich stahlkappe - hochwertige außensohle sorgen - innensohle hochwertige außensohle | 27 | 527_fersenbereich stahlkappe eva_dämpfung grip untergrund_verstärkter fersenbereich stahlkappe_hochwertige außensohle sorgen | | 528 | kg fertig gepackt - rund 18 kg - 18 kg fertig - 18 kg - gleichwertigen artikel ersetzt | 27 | 528_kg fertig gepackt_rund 18 kg_18 kg fertig_18 kg | | 529 | deutscher krankenkassen wesentlich - deutsche gesundheitssystem - pfleger deutschland ungeimpft - abrechnungsdaten deutscher krankenkassen - deutscher krankenkassen | 27 | 529_deutscher krankenkassen wesentlich_deutsche gesundheitssystem_pfleger deutschland ungeimpft_abrechnungsdaten deutscher krankenkassen | | 530 | lassen konnte teichtmeister - vielleicht nimmt rest - rest versager gleich - lassen konnte - wäre fehler vielleicht | 27 | 530_lassen konnte teichtmeister_vielleicht nimmt rest_rest versager gleich_lassen konnte | | 531 | grundversorgung deutschland droht - deutschland grundversorgung sichern - steht grundversorgung deutschland - betriebe deutschland grundversorgung - risiken kritische infrastruktur | 27 | 531_grundversorgung deutschland droht_deutschland grundversorgung sichern_steht grundversorgung deutschland_betriebe deutschland grundversorgung | | 532 | twitterusa english gettr - möchtet anbei kontoverbindung - kontoverbindung hamburger sparkassede88 - youtubebackup twitter - youtubebackup twitter twitterusa | 27 | 532_twitterusa english gettr_möchtet anbei kontoverbindung_kontoverbindung hamburger sparkassede88_youtubebackup twitter | | 533 | krise jahre 2020 - auftretende krise vorwand - verursachte krise jahre - finanzkrise - auftretende krise | 27 | 533_krise jahre 2020_auftretende krise vorwand_verursachte krise jahre_finanzkrise | | 534 | polizist attackiert älteren - landau polizist attackiert - polizei reißt alten - attackiert älteren mann - offenbar polizeigewalt | 27 | 534_polizist attackiert älteren_landau polizist attackiert_polizei reißt alten_attackiert älteren mann | | 535 | servus tv - servustv talk - erstausstrahlung österreich - servustv dokumentation interviews - erstausstrahlung österreich deutschland | 27 | 535_servus tv_servustv talk_erstausstrahlung österreich_servustv dokumentation interviews | | 536 | raketenöfen effizienz schwersten - bauweise raketenöfen effizienz - raketenöfen effizienz - raketenofen outdoorküche sinnvoll - immer bewährt raketenofen | 27 | 536_raketenöfen effizienz schwersten_bauweise raketenöfen effizienz_raketenöfen effizienz_raketenofen outdoorküche sinnvoll | | 537 | iban at471400010010213369 schenkungen - at471400010010213369 schenkungen - paypal spendenkonto iban - gibaatwwxxx melden paypal - at471400010010213369 schenkungen manuel | 27 | 537_iban at471400010010213369 schenkungen_at471400010010213369 schenkungen_paypal spendenkonto iban_gibaatwwxxx melden paypal | | 538 | warum impfpflicht freien - impfpflicht verstößt recht - impfpflicht zulässig sei - allgemeine impfpflicht zulässig - impfpflicht zulässig | 27 | 538_warum impfpflicht freien_impfpflicht verstößt recht_impfpflicht zulässig sei_allgemeine impfpflicht zulässig | | 539 | petroleumheizung hierfür gute - petroleumheizung hierfür - vorteile petroleumheizung hierfür - folgende vorteile petroleumheizung - petroleumheizung | 27 | 539_petroleumheizung hierfür gute_petroleumheizung hierfür_vorteile 
petroleumheizung hierfür_folgende vorteile petroleumheizung | | 540 | fünfte asteroid überhaupt - erst fünfte asteroid - komet himmel sehen - asteroid überhaupt - komet himmel | 27 | 540_fünfte asteroid überhaupt_erst fünfte asteroid_komet himmel sehen_asteroid überhaupt | | 541 | patriotische aktivisten heuchlerischen - aktivisten heuchlerischen - aktivisten heuchlerischen teilnehmer - lesen systemkritisches banner - anlässlich scheinheiligen solidaritätskundgebung | 27 | 541_patriotische aktivisten heuchlerischen_aktivisten heuchlerischen_aktivisten heuchlerischen teilnehmer_lesen systemkritisches banner | | 542 | vorbezeichneten tagen ebenfalls - vorbezeichneten tagen - mannheim vorbezeichneten tagen - ersatzversammlung stadtgebiet - ebenfalls ganztätig verboten | 27 | 542_vorbezeichneten tagen ebenfalls_vorbezeichneten tagen_mannheim vorbezeichneten tagen_ersatzversammlung stadtgebiet | | 543 | platzbedarf einfache lagerung - einfache lagerung schnelle - geringe platzbedarf einfache - dabei geringe platzbedarf - schnelle zubereitung minimalen | 27 | 543_platzbedarf einfache lagerung_einfache lagerung schnelle_geringe platzbedarf einfache_dabei geringe platzbedarf | | 544 | stundenlang angenehme wärme - kälte wohlige wärme - wohlige wärme - elektrizität sorgt heizung - wärme falle stromausfalls | 27 | 544_stundenlang angenehme wärme_kälte wohlige wärme_wohlige wärme_elektrizität sorgt heizung | | 545 | 2023 frankfurt 06 - frankfurt 06 02 - 02 2023 frankfurt - 2023 frankfurt 13 - düsseldorf 11 02 | 27 | 545_2023 frankfurt 06_frankfurt 06 02_02 2023 frankfurt_2023 frankfurt 13 | | 546 | gestorben teilnahme zahlreich - gestorben teilnahme - teilnahme zahlreich reichte - verstorbener gedenken ganz - belebten orten abgehalten | 27 | 546_gestorben teilnahme zahlreich_gestorben teilnahme_teilnahme zahlreich reichte_verstorbener gedenken ganz | | 547 | rassismus menschen gemäss - persönlich rassistischer - rassismus menschen - halte persönlich rassistischer - persönlich rassistischer weißen | 27 | 547_rassismus menschen gemäss_persönlich rassistischer_rassismus menschen_halte persönlich rassistischer | | 548 | gekündigt gehalt arbeitslosengeld - februar arbeitgeber gekündigt - arbeitgeber gekündigt - arbeitgeber gekündigt gehalt - arbeitslosengeld gestrichen dafür | 27 | 548_gekündigt gehalt arbeitslosengeld_februar arbeitgeber gekündigt_arbeitgeber gekündigt_arbeitgeber gekündigt gehalt | | 549 | führte time magazine - narrative 73 tom - time magazine leyen - erschienen hochbrisanten artikel - geschichte herman | 27 | 549_führte time magazine_narrative 73 tom_time magazine leyen_erschienen hochbrisanten artikel | | 550 | massiven sanktionen russland - überzeugen russland schuld - beschlagnahmt russische staat - beschlagnahmt russische - sanktionen russland bereits | 27 | 550_massiven sanktionen russland_überzeugen russland schuld_beschlagnahmt russische staat_beschlagnahmt russische | | 551 | salzkristall leuchte erwärmt - salzkristall leuchte wohltuende - wohltuende licht salzkristall - licht salzkristall leuchte - licht salzkristall | 27 | 551_salzkristall leuchte erwärmt_salzkristall leuchte wohltuende_wohltuende licht salzkristall_licht salzkristall leuchte | | 552 | geschäfte westukrainischen nationalistischen - westukrainischen nationalistischen nazi - westukrainischen nationalistischen - krieg ukraine herbeiführen - lassen ukrainische volk | 27 | 552_geschäfte westukrainischen nationalistischen_westukrainischen nationalistischen nazi_westukrainischen nationalistischen_krieg ukraine 
herbeiführen | | 553 | video music youtube - youtube free music - video contains copyrighted - music youtube - music youtube free | 26 | 553_video music youtube_youtube free music_video contains copyrighted_music youtube | | 554 | verwenden gruppe hören - funkgeräte pro gruppe - kanal gleiche verschlüsselung - solange gleichen kanal - gruppe hören miteinander | 26 | 554_verwenden gruppe hören_funkgeräte pro gruppe_kanal gleiche verschlüsselung_solange gleichen kanal | | 555 | solarpanel taschenlampe kabellos - enthalten solar powerbank - integriertes solarpanel taschenlampe - integriertes solarpanel - solar powerbank 20 | 26 | 555_solarpanel taschenlampe kabellos_enthalten solar powerbank_integriertes solarpanel taschenlampe_integriertes solarpanel | | 556 | raketenöfen effizienz schwersten - ersetzen raketenofen stark - bauweise raketenöfen effizienz - herd ersetzen raketenofen - raketenofen stark reduziert | 26 | 556_raketenöfen effizienz schwersten_ersetzen raketenofen stark_bauweise raketenöfen effizienz_herd ersetzen raketenofen | | 557 | gestern friedlichen - gestern friedlichen lichter - leben gekommen möge - menschen frieden aufklärung - läuft frieden | 26 | 557_gestern friedlichen_gestern friedlichen lichter_leben gekommen möge_menschen frieden aufklärung | | 558 | münchen2212 schellingstrasse - uni münchen2212 schellingstrasse - schiessstätte friedhof berlin - münchen2212 - uni münchen2212 | 26 | 558_münchen2212 schellingstrasse_uni münchen2212 schellingstrasse_schiessstätte friedhof berlin_münchen2212 | | 559 | tschentscher müssten kontaktbeschränkungen - kontaktbeschränkungen bedarf bundesländern - müssten kontaktbeschränkungen geimpfte - müssten kontaktbeschränkungen - kontaktbeschränkungen geimpfte entscheiden | 26 | 559_tschentscher müssten kontaktbeschränkungen_kontaktbeschränkungen bedarf bundesländern_müssten kontaktbeschränkungen geimpfte_müssten kontaktbeschränkungen | | 560 | sportpferdezucht kornelia markel - hunderten pferden - soo stolz baby - stolz baby - soo stolz | 26 | 560_sportpferdezucht kornelia markel_hunderten pferden_soo stolz baby_stolz baby | | 561 | lebensmittelproduktion europa turbulenzen - krieg ukraine lebensmittelproduktion - ukraine lebensmittelproduktion europa - frankreich eu krisenmechanismus - lebensmittelproduktion europa | 26 | 561_lebensmittelproduktion europa turbulenzen_krieg ukraine lebensmittelproduktion_ukraine lebensmittelproduktion europa_frankreich eu krisenmechanismus | | 562 | google wegen sperrung - verfügung google wegen - verfügung google - world alternative media - google erwirkt | 26 | 562_google wegen sperrung_verfügung google wegen_verfügung google_world alternative media | | 563 | ministerin handlungsunfähig amt - ministerin handlungsunfähig - justizministerin bald - glaube minister - minister stets unbeholfene | 26 | 563_ministerin handlungsunfähig amt_ministerin handlungsunfähig_justizministerin bald_glaube minister | | 564 | gab nemos news - news 100 listener - nemos news 100 - cycle fake news - newsletter follow us | 26 | 564_gab nemos news_news 100 listener_nemos news 100_cycle fake news | | 565 | mitdenkenfolge bekannte werbespruch - nachdenken mitdenkenfolge bekannte - mitdenken folge verstanden - mitdenkenfolge bekannte - mitdenken folge vordenken | 26 | 565_mitdenkenfolge bekannte werbespruch_nachdenken mitdenkenfolge bekannte_mitdenken folge verstanden_mitdenkenfolge bekannte | | 566 | deutschen mehr demonstrieren - ausbreitenden demonstrationen deren - rasant ausbreitenden demonstrationen - gab aufrufe 
demonstrationen - ausbreitenden demonstrationen | 26 | 566_deutschen mehr demonstrieren_ausbreitenden demonstrationen deren_rasant ausbreitenden demonstrationen_gab aufrufe demonstrationen | | 567 | wundermittel nattokinase potentielle - nattokinase eingesetzt - killer wundermittel nattokinase - wundermittel nattokinase - nattokinase potentielle impfopfer | 26 | 567_wundermittel nattokinase potentielle_nattokinase eingesetzt_killer wundermittel nattokinase_wundermittel nattokinase | | 568 | original storm kettle - storm kettle - storm kettle kommt - kettle edelstahl sturmkanne - original kelly kettle | 26 | 568_original storm kettle_storm kettle_storm kettle kommt_kettle edelstahl sturmkanne | | 569 | benötigt wasserbeutel fast - benötigt wasserbeutel - täglichen wasserverbrauch litern - trinkwasser abfüllen lagern - wasserverbrauch litern | 26 | 569_benötigt wasserbeutel fast_benötigt wasserbeutel_täglichen wasserverbrauch litern_trinkwasser abfüllen lagern | | 570 | powerstation generieren eigenen - powerstation eignet bestens - powerstation generieren - powerstation eignet - dynamo powerstation generieren | 26 | 570_powerstation generieren eigenen_powerstation eignet bestens_powerstation generieren_powerstation eignet | | 571 | handelt behandelt gesundheit - gesundheit tun - behandelt gesundheit - gesundheit höchste zeit - gesundheit höchste | 26 | 571_handelt behandelt gesundheit_gesundheit tun_behandelt gesundheit_gesundheit höchste zeit | | 572 | impfung dänemark 83 - genauso hoch impfquote - hoch impfquote - impfquote land angesichts - impfung dänemark | 26 | 572_impfung dänemark 83_genauso hoch impfquote_hoch impfquote_impfquote land angesichts | | 573 | served world wars - world wars - world wars ww - justiz hals solidaritätmitkritischenärzten - world police officer | 26 | 573_served world wars_world wars_world wars ww_justiz hals solidaritätmitkritischenärzten | | 574 | partie kriminellen lassen - partie kriminellen - mehr verbrecherisch ekelt - mehr verbrecherisch - verbrechern | 26 | 574_partie kriminellen lassen_partie kriminellen_mehr verbrecherisch ekelt_mehr verbrecherisch | | 575 | maskenpflicht großen europäischen - europäischen airports quellen - maskenpflicht fällt flughäfen - maske bussen bahnen - deutschalnd maskenpflicht öffentlichen | 26 | 575_maskenpflicht großen europäischen_europäischen airports quellen_maskenpflicht fällt flughäfen_maske bussen bahnen | | 576 | tötete ibrahim terroristischen - ibrahim terroristischen - menschen tötete - ibrahim terroristischen motiven - tötete ibrahim | 26 | 576_tötete ibrahim terroristischen_ibrahim terroristischen_menschen tötete_ibrahim terroristischen motiven | | 577 | unterdrückt freiheit positive - positiven freiheit unterdrückt - freiheit positive negative - freiheit positive - positiven freiheit | 26 | 577_unterdrückt freiheit positive_positiven freiheit unterdrückt_freiheit positive negative_freiheit positive | | 578 | hellgrünen triebspitzen schmecken - grüne heucheln besonders - gepflückt jungen hellgrünen - brief widerständigen grünen - widerständigen grünen | 26 | 578_hellgrünen triebspitzen schmecken_grüne heucheln besonders_gepflückt jungen hellgrünen_brief widerständigen grünen | | 579 | steht ukraine krise - ukraine krise merkt - bald ähnlicher katastrophenfall - gerufen krise größer - rnd erklärt krisen | 26 | 579_steht ukraine krise_ukraine krise merkt_bald ähnlicher katastrophenfall_gerufen krise größer | | 580 | verteuert idiotischen klima - klima auflagen abgaben - idiotischen klima auflagen - konkrete 
auswirkung klimawahns - schützen österreicher teuerungslawine | 26 | 580_verteuert idiotischen klima_klima auflagen abgaben_idiotischen klima auflagen_konkrete auswirkung klimawahns | | 581 | 30 heldenplatz w2612 - heldenplatz w2612 - amsterdam 05 02 - heldenplatz w2612 wi2612 - luxemburg regensburg 18 | 26 | 581_30 heldenplatz w2612_heldenplatz w2612_amsterdam 05 02_heldenplatz w2612 wi2612 | | 582 | rund 700 bundeswehrsoldaten - rund 000 soldaten - 700 bundeswehrsoldaten - bundeswehr erhöht kontingent - 000 soldaten soldatinnen | 26 | 582_rund 700 bundeswehrsoldaten_rund 000 soldaten_700 bundeswehrsoldaten_bundeswehr erhöht kontingent | | 583 | funktioniert stromausfall blackout - stabo fc 850 - schwimmfähiges allwetter pmr - fc 850 handelt - beim fc 850 | 26 | 583_funktioniert stromausfall blackout_stabo fc 850_schwimmfähiges allwetter pmr_fc 850 handelt | | 584 | talk aktuelles australien - australien aktuelles - bernie australien aktuelles - australien aktuelles australien - aktuelles australien | 25 | 584_talk aktuelles australien_australien aktuelles_bernie australien aktuelles_australien aktuelles australien | | 585 | virus gefährlich politik - virus gefahr zurück - demonstrationen unterschieden virus - pandemie beendet wäre - sei virus absichtlich | 25 | 585_virus gefährlich politik_virus gefahr zurück_demonstrationen unterschieden virus_pandemie beendet wäre | | 586 | europa illegalen muslimischen - illegalen muslimischen - tobi bösen rechten - rechtsextremismus ministeriums - rechtsextremismus ministeriums hervor | 25 | 586_europa illegalen muslimischen_illegalen muslimischen_tobi bösen rechten_rechtsextremismus ministeriums | | 587 | mehr öffentlichen stellen - öffentlichen stellen läuft - regional tv plätzen - landstraße läuft digitalen - läuft digitalen plakatwänden | 25 | 587_mehr öffentlichen stellen_öffentlichen stellen läuft_regional tv plätzen_landstraße läuft digitalen | | 588 | diskutieren ___________ audioanalysen - impfzwang audioanalyse bereits - dabei ___________ audioanalysen - gehen erkläre audioanalyse - ___________ audioanalysen spreaker | 25 | 588_diskutieren ___________ audioanalysen_impfzwang audioanalyse bereits_dabei ___________ audioanalysen_gehen erkläre audioanalyse | | 589 | eins klimareligion gestern - klimareligion gestern mühe - klimareligion gestern - klimaterroristen wohl - klimareligion heuchelei | 25 | 589_eins klimareligion gestern_klimareligion gestern mühe_klimareligion gestern_klimaterroristen wohl | | 590 | abhängigkeit russischem gas - teure abhängigkeit russischem - russischem gas abhängig - weltkrieg verhindern russland - trump deutschland davor | 25 | 590_abhängigkeit russischem gas_teure abhängigkeit russischem_russischem gas abhängig_weltkrieg verhindern russland | | 591 | panikmache lüge narrativ - derzeit verschärfte rhetorik - rhetorik finde ungeschickt - verschärfte rhetorik finde - sagen narrativ zusammenbricht | 25 | 591_panikmache lüge narrativ_derzeit verschärfte rhetorik_rhetorik finde ungeschickt_verschärfte rhetorik finde | | 592 | wärme zuverlässig schlafsackinneren - robuster outdoor schlafsack - verfügt wärmekragen schnürzug - schlafsack extrem kalte - integralkapuze verfügt wärmekragen | 25 | 592_wärme zuverlässig schlafsackinneren_robuster outdoor schlafsack_verfügt wärmekragen schnürzug_schlafsack extrem kalte | | 593 | sanktionstrick usa - sanktion vernichtungswaffe liberalen - staaten dank sanktionen - sanktionstrick usa vereinigten - dank sanktionen | 25 | 593_sanktionstrick usa_sanktion vernichtungswaffe 
liberalen_staaten dank sanktionen_sanktionstrick usa vereinigten | | 594 | warum rpp präsentiert - moderne gewissenserforschung rpp - rpp - rpp präsentiert - diskussion gegnern befürwortern | 25 | 594_warum rpp präsentiert_moderne gewissenserforschung rpp_rpp_rpp präsentiert | | 595 | impfpflicht befürwortern scheint - vielen impfpflicht befürwortern - obwohl einrichtungsbezogenen impfpflicht - impfpflicht befürwortern - befürworter allgemeinen impfpflicht | 25 | 595_impfpflicht befürwortern scheint_vielen impfpflicht befürwortern_obwohl einrichtungsbezogenen impfpflicht_impfpflicht befürwortern | | 596 | donald blair narrative - frage tue übertreten - grenze erreicht frage - donald blair commentary - ziehe grenze frage | 25 | 596_donald blair narrative_frage tue übertreten_grenze erreicht frage_donald blair commentary | | 597 | kekulés kommentar politisches - alexander kekulé vorläufige - virologen alexander kekulé - kekulés kommentar - kekulé vorläufige | 25 | 597_kekulés kommentar politisches_alexander kekulé vorläufige_virologen alexander kekulé_kekulés kommentar | | 598 | gab see stew - stew gab see - stew gab - follow stew gab - gab see | 25 | 598_gab see stew_stew gab see_stew gab_follow stew gab | | 599 | betreiben autark lampenöl - petroleumheizung petroleumlampen autark - lampenöl betreiben autark - erhältlich autark lampenöl - autark lampenöl betreiben | 25 | 599_betreiben autark lampenöl_petroleumheizung petroleumlampen autark_lampenöl betreiben autark_erhältlich autark lampenöl | | 600 | via bitcoin bc1qjju0tuv006uhh9m209h5xr5y6qm2rjh54zuhgk - via bitcoin - bitcoin bc1qjju0tuv006uhh9m209h5xr5y6qm2rjh54zuhgk via - bitcoin btc 1afgnbmhxa6cy9ykusxysxvpjpyecpbkrr - 0xee6ed93c3adc474450011e9af22939a0b9b312c7 bitcoin btc | 25 | 600_via bitcoin bc1qjju0tuv006uhh9m209h5xr5y6qm2rjh54zuhgk_via bitcoin_bitcoin bc1qjju0tuv006uhh9m209h5xr5y6qm2rjh54zuhgk via_bitcoin btc 1afgnbmhxa6cy9ykusxysxvpjpyecpbkrr | | 601 | buch empfehlungen - unterstütze gerne - wollt unterstützen - danke paypal - wisekonto coaching buch | 25 | 601_buch empfehlungen_unterstütze gerne_wollt unterstützen_danke paypal | | 602 | spezial feindbild russland - feindbild russland - zeigt putin - russland nato marschiert - feindbild russland nato | 25 | 602_spezial feindbild russland_feindbild russland_zeigt putin_russland nato marschiert | | 603 | demonstrieren aktuell - linz versammlungsleiter - jeweiligen versammlungen bzw - versammlungsleiter eva - jeweiligen versammlungen | 25 | 603_demonstrieren aktuell_linz versammlungsleiter_jeweiligen versammlungen bzw_versammlungsleiter eva | | 604 | natürlich satire natörrlich - nachrichtenbeitragnatürlich satire - reine humorvolle satire - nachrichtenbeitragnatürlich satire verteidigen - natörrlich satire | 25 | 604_natürlich satire natörrlich_nachrichtenbeitragnatürlich satire_reine humorvolle satire_nachrichtenbeitragnatürlich satire verteidigen | | 605 | metal devil cokes - devil cokes - hell metal devil - knecht soros zion - devil | 25 | 605_metal devil cokes_devil cokes_hell metal devil_knecht soros zion | | 606 | wert kaufe edelmetalle - kaufe edelmetalle schon - münzen günstigen preisen - münzen günstigen - euro mehr wert | 25 | 606_wert kaufe edelmetalle_kaufe edelmetalle schon_münzen günstigen preisen_münzen günstigen | | 607 | polnische ministerpräsident - warschau besorgen - warschau - krakau sorgte - wisla krakau sorgte | 25 | 607_polnische ministerpräsident_warschau besorgen_warschau_krakau sorgte | | 608 | joachim müller telegram - telegramkanal mittelerde tv - 
joachim müller heute - uhr telegramkanal mittelerde - kanal fragen hans | 25 | 608_joachim müller telegram_telegramkanal mittelerde tv_joachim müller heute_uhr telegramkanal mittelerde | | 609 | wwg1wga corona intelligenz - post wwg1wga corona - plan große erwachen - läuft wwg1wga corona - wwg1wga corona | 25 | 609_wwg1wga corona intelligenz_post wwg1wga corona_plan große erwachen_läuft wwg1wga corona | | 610 | vorweihnachtszeit schweden - candle light - candle - längste nacht jahres - vorweihnachtszeit | 25 | 610_vorweihnachtszeit schweden_candle light_candle_längste nacht jahres | | 611 | kapazitäten österreich leistbarem - österreich leistbarem niveau - kapazitäten österreich - österreich leistbarem - österreichische hauptstadt | 25 | 611_kapazitäten österreich leistbarem_österreich leistbarem niveau_kapazitäten österreich_österreich leistbarem | | 612 | leichter sportlicher einsatzstiefel - size springerstiefel unterwegs - bietet squad stiefel - squad stiefel inch - sportlicher einsatzstiefel außergewöhnlich | 24 | 612_leichter sportlicher einsatzstiefel_size springerstiefel unterwegs_bietet squad stiefel_squad stiefel inch | | 613 | logistikkosten ukraine krieg - energiepreise logistikkosten ukraine - ukraine krieg supermarkt - logistikkosten ukraine - krieg ukraine dürfte | 24 | 613_logistikkosten ukraine krieg_energiepreise logistikkosten ukraine_ukraine krieg supermarkt_logistikkosten ukraine | | 614 | mini usb lampe - usb lampe ideal - lampe 190 zentimeter - ideal lampe 190 - lampe ideal reduziert | 24 | 614_mini usb lampe_usb lampe ideal_lampe 190 zentimeter_ideal lampe 190 | | 615 | resistent rost klassiker - leistungsverhältnis laterne galvanisch - laterne galvanisch verzinktem - laterne galvanisch - stabil besonders resistent | 24 | 615_resistent rost klassiker_leistungsverhältnis laterne galvanisch_laterne galvanisch verzinktem_laterne galvanisch | | 616 | ukraine fordert lebensmittelbranche - manifestation leidensenergie deutschem - schlachtbetriebe deutschland billige - leidensenergie deutschem boden - schlachtbetriebe deutschland | 24 | 616_ukraine fordert lebensmittelbranche_manifestation leidensenergie deutschem_schlachtbetriebe deutschland billige_leidensenergie deutschem boden | | 617 | sorge medien - immer mediathek sorge - immer mediathek - mediathek sorge - mediathek sorge medien | 24 | 617_sorge medien_immer mediathek sorge_immer mediathek_mediathek sorge | | 618 | grillen immer ideal - immer ideal camping - kochen grillen - kochen grillen immer - grillen | 24 | 618_grillen immer ideal_immer ideal camping_kochen grillen_kochen grillen immer | | 619 | hassbriefe russen gemeldet - gestrichen vergehen russe - deutschen schreiben russen - schreiben russen - russen deutschland ziel | 24 | 619_hassbriefe russen gemeldet_gestrichen vergehen russe_deutschen schreiben russen_schreiben russen | | 620 | protestbewegung erfolg müssen - daran protestbewegung erfolg - dahin protest städten - protest städten ortschaften - daran protestbewegung | 24 | 620_protestbewegung erfolg müssen_daran protestbewegung erfolg_dahin protest städten_protest städten ortschaften | | 621 | bedeutet deutschland letzte - bundesrepublik deutschland immer - gesetzes bedeutet deutschland - deutschland letzte - deutschland letzte ausfahrt | 24 | 621_bedeutet deutschland letzte_bundesrepublik deutschland immer_gesetzes bedeutet deutschland_deutschland letzte | | 622 | dokumentarfilm 100 ärzte - anzugeben dokumentarfilm 100 - bitte unterstützen filmprojekt - unterstützen filmprojekt bitte - anzugeben 
dokumentarfilm | 24 | 622_dokumentarfilm 100 ärzte_anzugeben dokumentarfilm 100_bitte unterstützen filmprojekt_unterstützen filmprojekt bitte | | 623 | künstlerin wichtig - genau bekommen plötz - bekommen plötz prinzip - bekommen plötz - unterwürfigkeit bessere begriff | 24 | 623_künstlerin wichtig_genau bekommen plötz_bekommen plötz prinzip_bekommen plötz | | 624 | geldbörsen datenschutz datensicherheit - herren geldbörsen datenschutz - datenschutz datensicherheit erst - datenschutz datensicherheit - datensicherheit erst | 24 | 624_geldbörsen datenschutz datensicherheit_herren geldbörsen datenschutz_datenschutz datensicherheit erst_datenschutz datensicherheit | | 625 | obdachlosen dagegen berlin - obdachlosen berlin brauchen - obdachlose deutschland zufluchtsorten - obdachlosen berlin - obdachlose deutschland | 24 | 625_obdachlosen dagegen berlin_obdachlosen berlin brauchen_obdachlose deutschland zufluchtsorten_obdachlosen berlin | | 626 | korn hefe bio - frisches bio brot - einfach früchten kräutern - buttermilch hinzuzufügen somit - ganz einfach früchten | 24 | 626_korn hefe bio_frisches bio brot_einfach früchten kräutern_buttermilch hinzuzufügen somit | | 627 | geheimdienste virus wussten - regierungsbeamte geheimdienste virus - geheimdienste virus - skepsis republikanern us - virus wussten | 24 | 627_geheimdienste virus wussten_regierungsbeamte geheimdienste virus_geheimdienste virus_skepsis republikanern us | | 628 | einnahmen durchschnittsbürgers diverse - einnahmen durchschnittsbürgers - drittel einnahmen durchschnittsbürgers - prozentual mehr zahlen - durchschnittsbürgers diverse steuern | 24 | 628_einnahmen durchschnittsbürgers diverse_einnahmen durchschnittsbürgers_drittel einnahmen durchschnittsbürgers_prozentual mehr zahlen | | 629 | mobil hyundai stromgenerator - stromgenerator hy4500sei ausgestattet - problemlos betrieben generator - betrieben generator steht - betrieben generator | 24 | 629_mobil hyundai stromgenerator_stromgenerator hy4500sei ausgestattet_problemlos betrieben generator_betrieben generator steht | | 630 | liefern eier mehr - lieferanten ausbruchs vogelgrippe - liefern eier - vogelgrippe ausbruch usa - vogelgrippe ausbruch | 24 | 630_liefern eier mehr_lieferanten ausbruchs vogelgrippe_liefern eier_vogelgrippe ausbruch usa | | 631 | gesorgt protest geplantes - gesorgt protest - beteiligten aktivisten - beteiligten aktivisten fragen - identitäre aktivisten | 24 | 631_gesorgt protest geplantes_gesorgt protest_beteiligten aktivisten_beteiligten aktivisten fragen | | 632 | vpn provider umgehen - vpn tunnel online - vpn provider - vpn anbieter findet - mehr vpn provider | 24 | 632_vpn provider umgehen_vpn tunnel online_vpn provider_vpn anbieter findet | | 633 | produkt schweizer armee - beschaffungsstellen schweizer armee - schweizer armeedas qualitätsbewusstsein - schweizer armee qualitätsbewusstsein - wolldecke schweizer armee | 24 | 633_produkt schweizer armee_beschaffungsstellen schweizer armee_schweizer armeedas qualitätsbewusstsein_schweizer armee qualitätsbewusstsein | | 634 | überschwemmungen australien - aufgrund verheerender überschwemmungen - verheerender überschwemmungen - massiven überschwemmungen - verheerender überschwemmungen häuser | 24 | 634_überschwemmungen australien_aufgrund verheerender überschwemmungen_verheerender überschwemmungen_massiven überschwemmungen | | 635 | genau leben gefährlich - leben gefährlich - lebensmittel heute internet - frisch krise unverzichtbarer - wichtig lebensmittelvorrat anzulegen | 24 | 635_genau leben 
gefährlich_leben gefährlich_lebensmittel heute internet_frisch krise unverzichtbarer | | 636 | lassen hersteller produzieren - webkante farbstreifen fertigt - farbstreifen fertigt lassen - fertigt lassen hersteller - produzieren originalvorgaben 100 | 24 | 636_lassen hersteller produzieren_webkante farbstreifen fertigt_farbstreifen fertigt lassen_fertigt lassen hersteller | | 637 | migrationshintergrund deutsche - migrationshintergrund deutsche unternehmen - migranten meistens bock - seien frauen migranten - menschen migrationshintergrund einstieg | 24 | 637_migrationshintergrund deutsche_migrationshintergrund deutsche unternehmen_migranten meistens bock_seien frauen migranten | | 638 | mfg landessprecher salzburg - mfg pressekonferenz expertenrunde - oberösterreich ddr - pressekonferenz expertenrunde - mfg oberösterreich ddr | 24 | 638_mfg landessprecher salzburg_mfg pressekonferenz expertenrunde_oberösterreich ddr_pressekonferenz expertenrunde | | 639 | financial system qfs - qfs international monetary - nachlesen qfs quantumfinancialsystem - quantum financial system - quantenfinanzsystem | 24 | 639_financial system qfs_qfs international monetary_nachlesen qfs quantumfinancialsystem_quantum financial system | | 640 | humanus findest gesundheit - wer gesundheit - wer gesundheit staat - naturheilmittel bekannten buchreihe - bestellen wer gesundheit | 24 | 640_humanus findest gesundheit_wer gesundheit_wer gesundheit staat_naturheilmittel bekannten buchreihe | | 641 | angenehme strahlungswärme sauerstoffmangelsicherung - thermoelektrische zündsicherung sicheren - strahlungswärme sauerstoffmangelsicherung - sowie thermoelektrische zündsicherung - thermoelektrische zündsicherung | 23 | 641_angenehme strahlungswärme sauerstoffmangelsicherung_thermoelektrische zündsicherung sicheren_strahlungswärme sauerstoffmangelsicherung_sowie thermoelektrische zündsicherung | | 642 | grüßen liebe österreicher - herzliche grüße österreich - austria freundlichen grüßen - liebe grüße niederösterreich - grüßen coy austria | 23 | 642_grüßen liebe österreicher_herzliche grüße österreich_austria freundlichen grüßen_liebe grüße niederösterreich | | 643 | proteste vielen orten - proteste vielen - corona proteste vielen - bereit demonstranten - stehen bereit demonstranten | 23 | 643_proteste vielen orten_proteste vielen_corona proteste vielen_bereit demonstranten | | 644 | geigerzähler aufspüren radioaktiver - radioaktiver strahlung geigerzähler - aufspüren radioaktiver - aufspüren radioaktiver strahlung - zuverlässig erhöhung radioaktiver | 23 | 644_geigerzähler aufspüren radioaktiver_radioaktiver strahlung geigerzähler_aufspüren radioaktiver_aufspüren radioaktiver strahlung | | 645 | edith brötzner felix - mal felix baumgartner - rösch hoch roß - felix baumgartner fragen - felix baumgartner | 23 | 645_edith brötzner felix_mal felix baumgartner_rösch hoch roß_felix baumgartner fragen | | 646 | dankeschön abonniere telegram - quelle abonniere telegram - bild abonniere telegram - abonniere telegram kanal - abonniere telegram | 23 | 646_dankeschön abonniere telegram_quelle abonniere telegram_bild abonniere telegram_abonniere telegram kanal | | 647 | geschwurbels kumm twitter - komm bubble abonniere - oft beifall medialer - twitter leiberl zeug - german neue normalität | 23 | 647_geschwurbels kumm twitter_komm bubble abonniere_oft beifall medialer_twitter leiberl zeug | | 648 | video wütenden krankenschwester - hören krankenschwester - krankenschwester hören - krankenschwester hören krankenschwester - krankenschwester 
krankenschwester | 23 | 648_video wütenden krankenschwester_hören krankenschwester_krankenschwester hören_krankenschwester hören krankenschwester | | 649 | attacke darunter mord - meldete polizei deutschland - polizei deutschland mindestens - polizei deutschland - totschlag raub bedrohung | 23 | 649_attacke darunter mord_meldete polizei deutschland_polizei deutschland mindestens_polizei deutschland | | 650 | schochwitz deutschland - 2021 schochwitz deutschland - ernst wolff journalist - journalist bestsellerautor ernst - november 2021 schochwitz | 23 | 650_schochwitz deutschland_2021 schochwitz deutschland_ernst wolff journalist_journalist bestsellerautor ernst | | 651 | warnstreik österreich - warnstreik österreich schafft - generalstreik auszurufen nimmt - allgemeinen generalstreik auszurufen - generalstreik auszurufen | 23 | 651_warnstreik österreich_warnstreik österreich schafft_generalstreik auszurufen nimmt_allgemeinen generalstreik auszurufen | | 652 | beim kriegsgeschehen ukraine - bundeswehr inneren impfpflicht - impfpflicht naht schatten - impfpflicht sorge versucht - kriegsgeschehen ukraine | 23 | 652_beim kriegsgeschehen ukraine_bundeswehr inneren impfpflicht_impfpflicht naht schatten_impfpflicht sorge versucht | | 653 | bundestagsabgeordnete anti impfpflicht - bundestagswahlkampf verhinderung impfpflicht - antragsentwurf allgemeine impfpflicht - grundrechtseingriffs impfpflicht - debatte impfpflicht coronavirus | 23 | 653_bundestagsabgeordnete anti impfpflicht_bundestagswahlkampf verhinderung impfpflicht_antragsentwurf allgemeine impfpflicht_grundrechtseingriffs impfpflicht | | 654 | ukraine wurde laptop - verlassenen militärbase ukraine - amerikaner daten ukrainische - daten amerikanischer aufklärungsflüge - daten ukrainische armee | 23 | 654_ukraine wurde laptop_verlassenen militärbase ukraine_amerikaner daten ukrainische_daten amerikanischer aufklärungsflüge | | 655 | telegram app stores - angeschmiert app telegram - telegram app - app telegram - apple google telegram | 23 | 655_telegram app stores_angeschmiert app telegram_telegram app_app telegram | | 656 | globalistische gremium harvard - darauf globalistische gremium - weltwirtschaftsforum einfach - globalistische gremium - weist darauf globalistische | 23 | 656_globalistische gremium harvard_darauf globalistische gremium_weltwirtschaftsforum einfach_globalistische gremium | | 657 | russisches militärflugzeug ostsee - jahren russland atomwaffen - russisches militärflugzeug - russland atomwaffen - russland atomwaffen bewaffnete | 23 | 657_russisches militärflugzeug ostsee_jahren russland atomwaffen_russisches militärflugzeug_russland atomwaffen | | 658 | eingesetzt elektrische geräte - elektrische geräte - eingesetzt elektrische - elektrische geräte leuchten - elektrische | 23 | 658_eingesetzt elektrische geräte_elektrische geräte_eingesetzt elektrische_elektrische geräte leuchten | | 659 | schon corona krise - corona krise weit - seilschaften corona krise - corona krise lang - corona krise | 23 | 659_schon corona krise_corona krise weit_seilschaften corona krise_corona krise lang | | 660 | sagte söder montag - bayerischen kabinetts münchen - unverständnis aussagen früheren - kubicki sagt geht - kubicki sagt | 23 | 660_sagte söder montag_bayerischen kabinetts münchen_unverständnis aussagen früheren_kubicki sagt geht | | 661 | stuttgart flüchtlinge ukraine - flüchtlinge ukraine dürfen - nachweis hotels übernachten - hotels übernachten müssen - beisst stuttgart flüchtlinge | 23 | 661_stuttgart flüchtlinge 
ukraine_flüchtlinge ukraine dürfen_nachweis hotels übernachten_hotels übernachten müssen | | 662 | telegram gesperrt regime - inzwischen telegram gesperrt - telegram gesperrt - lange zensurfrei deutschen - zensurfrei deutschen behörden | 23 | 662_telegram gesperrt regime_inzwischen telegram gesperrt_telegram gesperrt_lange zensurfrei deutschen | | 663 | wien passierscheine ausgangssperre - passierscheine ausgangssperre verschickt - passierscheine ausgangssperre - blackout passierscheine ausgangssperre - passierscheine mitarbeiter wiener | 23 | 663_wien passierscheine ausgangssperre_passierscheine ausgangssperre verschickt_passierscheine ausgangssperre_blackout passierscheine ausgangssperre | | 664 | überlastung gesundheitswesens eigentlich - befürchteten überlastung gesundheitswesens - krankenhauseinweisungen wegen corona - überlastung gesundheitswesens - wegen corona überlastet | 23 | 664_überlastung gesundheitswesens eigentlich_befürchteten überlastung gesundheitswesens_krankenhauseinweisungen wegen corona_überlastung gesundheitswesens | | 665 | umwerfend abonnieren telegram - konstantin haslauers telegram - telegram förderer - telegramelite rundbrief abonnieren - links telegram kilezmore | 23 | 665_umwerfend abonnieren telegram_konstantin haslauers telegram_telegram förderer_telegramelite rundbrief abonnieren | | 666 | gefährlichen welt überleben - überleben deshalb unerlässlich - survival handbuch navy - schützen verteidigen praktischen - müssen gefährlichen welt | 23 | 666_gefährlichen welt überleben_überleben deshalb unerlässlich_survival handbuch navy_schützen verteidigen praktischen | | 667 | deutschland trotz maskenpflicht - meldet deutschland trotz - strenger impfpassregeln schulschliessungen - pandemie meldet deutschland - deutschland trotz | 23 | 667_deutschland trotz maskenpflicht_meldet deutschland trotz_strenger impfpassregeln schulschliessungen_pandemie meldet deutschland | | 668 | hervorgerufen impfung corona - österreichischen ärztekammer präsidenten - impfung corona ausgelöst - arzt dr andreas - möller facharzt innere | 23 | 668_hervorgerufen impfung corona_österreichischen ärztekammer präsidenten_impfung corona ausgelöst_arzt dr andreas | | 669 | lockerungen hoffe franzosen - frankreich zemmour le - frankreich zemmour - präsidentschaftswahlen macron möchte - franzosen | 23 | 669_lockerungen hoffe franzosen_frankreich zemmour le_frankreich zemmour_präsidentschaftswahlen macron möchte | | 670 | wassergehalt anfällig - geringen wassergehalt anfällig - panzerplatten genannt geringen - genannt geringen wassergehalt - scherzhaft panzerplatten genannt | 23 | 670_wassergehalt anfällig_geringen wassergehalt anfällig_panzerplatten genannt geringen_genannt geringen wassergehalt | | 671 | logistikbranche warnt - güterkraftverkehr logistik entsorgung - dieselpreise fordert bundesverband - dieselpreise fordert - euro dieselsäule ruft | 23 | 671_logistikbranche warnt_güterkraftverkehr logistik entsorgung_dieselpreise fordert bundesverband_dieselpreise fordert | | 672 | warum geht demokratie - demokratie bringen ausgerottet - sozialismus kanzlerstuhl warum - geht demokratie willen - demokratie willen volkes | 23 | 672_warum geht demokratie_demokratie bringen ausgerottet_sozialismus kanzlerstuhl warum_geht demokratie willen | | 673 | regierungsmitglied unwürdig türkis - inakzeptabel regierungsmitglied unwürdig - vollkommen inakzeptabel regierungsmitglied - inakzeptabel regierungsmitglied - demokratieverweigerer | 23 | 673_regierungsmitglied unwürdig türkis_inakzeptabel 
regierungsmitglied unwürdig_vollkommen inakzeptabel regierungsmitglied_inakzeptabel regierungsmitglied | | 674 | quelle video deutschen - video zusammenschnitt deutscher - video zusammenschnitt deutsch - video deutschen ut - video deutschen | 23 | 674_quelle video deutschen_video zusammenschnitt deutscher_video zusammenschnitt deutsch_video deutschen ut | | 675 | telegram youtube - telegram youtube twitter - telegram telegram youtube - facebook infokanal telegram - telegram gesamte video | 23 | 675_telegram youtube_telegram youtube twitter_telegram telegram youtube_facebook infokanal telegram | | 676 | vermögensverwalter gesellschaft blackrock - gesellschaft blackrock - investmentfonds blackrock - vermögensverwalter blackrock - vanguard blackrock | 22 | 676_vermögensverwalter gesellschaft blackrock_gesellschaft blackrock_investmentfonds blackrock_vermögensverwalter blackrock | | 677 | bundespräsident joachim gauck - freiheit ex bundespräsident - bundespräsident bekämpft energiekrise - alt bundespräsident - ehemalige bundespräsident | 22 | 677_bundespräsident joachim gauck_freiheit ex bundespräsident_bundespräsident bekämpft energiekrise_alt bundespräsident | | 678 | grössten umweltverschmutzer - co2 vergiftungen gesundheitlich - wurde maßnahmenkritikern schon - erkenntnis wurde maßnahmenkritikern - grössten umweltverschmutzer riesige | 22 | 678_grössten umweltverschmutzer_co2 vergiftungen gesundheitlich_wurde maßnahmenkritikern schon_erkenntnis wurde maßnahmenkritikern | | 679 | luciferians big pharma - pharma agenda life - wellness use jaco - fight back luciferians - luciferians | 22 | 679_luciferians big pharma_pharma agenda life_wellness use jaco_fight back luciferians | | 680 | solar powerbank ermöglicht - solar powerbank 20 - zuverlässige energieversorger solar - solarpanel taschenlampe kabellos - solar powerbank | 22 | 680_solar powerbank ermöglicht_solar powerbank 20_zuverlässige energieversorger solar_solarpanel taschenlampe kabellos | | 681 | selbstverteidigungsschirm erhalten spezialprodukt - multifunktionalen selbstverteidigungsschirm erhalten - multifunktionalen selbstverteidigungsschirm - stabilen multifunktionalen selbstverteidigungsschirm - selbstverteidigungsschirm extrem stabilen | 22 | 681_selbstverteidigungsschirm erhalten spezialprodukt_multifunktionalen selbstverteidigungsschirm erhalten_multifunktionalen selbstverteidigungsschirm_stabilen multifunktionalen selbstverteidigungsschirm | | 682 | wünsche geruhsame nacht - wünscht schönen abend - mitternacht gute nacht - star gute nacht - gute nacht ab | 22 | 682_wünsche geruhsame nacht_wünscht schönen abend_mitternacht gute nacht_star gute nacht | | 683 | gartenparty beim wintergrillen - wintergrillen sonstigen veranstaltung - anziehungspunkt gartenparty beim - gartenparty beim - wintergrillen sonstigen | 22 | 683_gartenparty beim wintergrillen_wintergrillen sonstigen veranstaltung_anziehungspunkt gartenparty beim_gartenparty beim | | 684 | krisenfall ausfällen energie - heizung löschautomatik petroleumheizung - energie gas stromversorgung - petroleumheizung mobile - löschautomatik petroleumheizung | 22 | 684_krisenfall ausfällen energie_heizung löschautomatik petroleumheizung_energie gas stromversorgung_petroleumheizung mobile | | 685 | dr sönnichsen anklagepunkten - amtsanmaßung schuldig gutes - bezirksgericht freigesprochen - dr sönnichsen - dr andreas sönnichsen | 22 | 685_dr sönnichsen anklagepunkten_amtsanmaßung schuldig gutes_bezirksgericht freigesprochen_dr sönnichsen | | 686 | berliner wirtschaftssenatorin 
fragwürdigen - rundfunks berlin brandenburg - rundfunks berlin - berliner wirtschaftssenatorin - rundfunk berlin | 22 | 686_berliner wirtschaftssenatorin fragwürdigen_rundfunks berlin brandenburg_rundfunks berlin_berliner wirtschaftssenatorin | | 687 | produktion biologischem löwenzahn - produktion löwenzahn extrakts - löwenzahn extrakt frischen - beliebtes kraut wegen - produktion löwenzahn | 22 | 687_produktion biologischem löwenzahn_produktion löwenzahn extrakts_löwenzahn extrakt frischen_beliebtes kraut wegen | | 688 | weiterlesen teilt beitrag - weiterlesen teilt - abonnieren weiterlesen - beitrag folgt - teilt beitrag folgt | 22 | 688_weiterlesen teilt beitrag_weiterlesen teilt_abonnieren weiterlesen_beitrag folgt | | 689 | visionen deutschland 2050 - ausgabe deutschland 2050 - schwerpunktthema deutschland 2050 - deutschland 2050 vorstellung - deutschland 2050 mehr | 22 | 689_visionen deutschland 2050_ausgabe deutschland 2050_schwerpunktthema deutschland 2050_deutschland 2050 vorstellung | | 690 | situationseinschätzung verfügen bürgerproteste - verfügen bürgerproteste vorgelassen - verfügen bürgerproteste - bürgerproteste vorgelassen - bürgerproteste vorgelassen neuerliche | 22 | 690_situationseinschätzung verfügen bürgerproteste_verfügen bürgerproteste vorgelassen_verfügen bürgerproteste_bürgerproteste vorgelassen | | 691 | uhr gesundheitspersonal protestiert - gesundheitspersonal protestiert lautstark - gesundheitspersonal protestiert - protest gesundheitsministerium wien - protestiert lautstark ärztekammer | 22 | 691_uhr gesundheitspersonal protestiert_gesundheitspersonal protestiert lautstark_gesundheitspersonal protestiert_protest gesundheitsministerium wien | | 692 | freiheit demokratie versammlung - start platz menschenrechte - freiheit demokratie ukraine - demonstration frieden freiheit - stephan harbarth genau | 22 | 692_freiheit demokratie versammlung_start platz menschenrechte_freiheit demokratie ukraine_demonstration frieden freiheit | | 693 | rückgang fallsterblichkeit jüngeren - rückgang fallsterblichkeit - erhöhte übersterblichkeit - assoziierten todesfälle - assoziierten todesfälle vereinheitlichen | 22 | 693_rückgang fallsterblichkeit jüngeren_rückgang fallsterblichkeit_erhöhte übersterblichkeit_assoziierten todesfälle | | 694 | krankheitsrisiko geimpften zweiten - hohes krankheitsrisiko geimpften - besonders hohes krankheitsrisiko - krankheitsrisiko geimpften - aufgefallen legen fakten | 22 | 694_krankheitsrisiko geimpften zweiten_hohes krankheitsrisiko geimpften_besonders hohes krankheitsrisiko_krankheitsrisiko geimpften | | 695 | 4200 mobiler gasheizofen - keramik gasheizofen kgh - mobiler gasheizofen - gasheizofen kgh 4200 - eingesetzt keramik gasheizofen | 22 | 695_4200 mobiler gasheizofen_keramik gasheizofen kgh_mobiler gasheizofen_gasheizofen kgh 4200 | | 696 | gasflaschenaufstellraum betrieb benötigte - platzsparenden verstauen gasflasche - gasflasche verfügt keramik - stahlblechgehäuse integriertem gasflaschenaufstellraum - integriertem gasflaschenaufstellraum | 22 | 696_gasflaschenaufstellraum betrieb benötigte_platzsparenden verstauen gasflasche_gasflasche verfügt keramik_stahlblechgehäuse integriertem gasflaschenaufstellraum | | 697 | impfpflicht österreich städte - 2021 österreich impfpflicht - österreich impfpflicht österreich - österreich willen geimpft - menschen ganz österreich | 22 | 697_impfpflicht österreich städte_2021 österreich impfpflicht_österreich impfpflicht österreich_österreich willen geimpft | | 698 | kanzlerpartei wählt lars - 
kanzlerpartei wählt - umgekehrt mehr parteien - spd parteitag - mehr parteien | 22 | 698_kanzlerpartei wählt lars_kanzlerpartei wählt_umgekehrt mehr parteien_spd parteitag | | 699 | telegram direkt website - versionen deinstallierst nachrichten - warnung lade telegram - versionen deinstallierst - alten versionen deinstallierst | 22 | 699_telegram direkt website_versionen deinstallierst nachrichten_warnung lade telegram_versionen deinstallierst | | 700 | oberösterreichischer grundrechtsaktivist - oberösterreichischer grundrechtsaktivist via - oberösterreichischer - della democrazia italia - erhalten via propagandanervt | 22 | 700_oberösterreichischer grundrechtsaktivist_oberösterreichischer grundrechtsaktivist via_oberösterreichischer_della democrazia italia | | 701 | radikale ukrainische nationalisten - spielt ukrainischen politischen - ukrainische nationalisten - erkennt ukrainisch nationalistische - gegenproteste russische minderheit | 22 | 701_radikale ukrainische nationalisten_spielt ukrainischen politischen_ukrainische nationalisten_erkennt ukrainisch nationalistische | | 702 | windkrafträder 2030 bauen - 000 windkrafträder 2030 - windkraft angekündigt 2030 - plan ausbau windkraft - ausbau windkraftanlagen | 22 | 702_windkrafträder 2030 bauen_000 windkrafträder 2030_windkraft angekündigt 2030_plan ausbau windkraft | | 703 | petromax teekessel hochwertigem - petromax marke feuer - petromax - hitzeverteilung petromax - perfekte hitzeverteilung petromax | 22 | 703_petromax teekessel hochwertigem_petromax marke feuer_petromax_hitzeverteilung petromax | | 704 | ärztekammerwahl besonders wichtiger - natürlich ärztekammerpräsidenten - natürlich ärztekammerpräsidenten szekeres - ärztekammerwahl besonders - sönnichsen natürlich ärztekammerpräsidenten | 22 | 704_ärztekammerwahl besonders wichtiger_natürlich ärztekammerpräsidenten_natürlich ärztekammerpräsidenten szekeres_ärztekammerwahl besonders | | 705 | menschen deutschland bunkerplätze - menschen deutschland nachziehen - aussage deutschlands außenministerin - deutschlands außenministerin annalena - bunker bauen lassen | 22 | 705_menschen deutschland bunkerplätze_menschen deutschland nachziehen_aussage deutschlands außenministerin_deutschlands außenministerin annalena | | 706 | leben unrechtsstaat kindesmissbrauch - unrechtsstaat kindesmissbrauch gesetze - unrechtsstaat kindesmissbrauch - darstellung kindesmissbrauchs bekommt - kindesmissbrauchs bekommt monate | 22 | 706_leben unrechtsstaat kindesmissbrauch_unrechtsstaat kindesmissbrauch gesetze_unrechtsstaat kindesmissbrauch_darstellung kindesmissbrauchs bekommt | | 707 | corona maßnahmen erleben - nebenwirkung corona - thema nebenwirkung corona - nebenwirkung corona impfungen - live streams videos | 22 | 707_corona maßnahmen erleben_nebenwirkung corona_thema nebenwirkung corona_nebenwirkung corona impfungen | | 708 | zensur hilfreiche aufklärungsvideos - findet hilfreiche aufklärungsvideos - hilfreiche aufklärungsvideos sowie - hilfreiche aufklärungsvideos - aufklärungsvideos sowie informationen | 22 | 708_zensur hilfreiche aufklärungsvideos_findet hilfreiche aufklärungsvideos_hilfreiche aufklärungsvideos sowie_hilfreiche aufklärungsvideos | | 709 | europa ernährungssouveränität brauche - europa ernährungssouveränität - klar europa ernährungssouveränität - unabhängige lebensmittelversorgung europas - ernährungssouveränität brauche | 22 | 709_europa ernährungssouveränität brauche_europa ernährungssouveränität_klar europa ernährungssouveränität_unabhängige lebensmittelversorgung europas | | 
710 | taiwan militärisch - taiwan krieg gibt - taiwan militärisch unterstützt - bald taiwan krieg - taiwan krieg | 22 | 710_taiwan militärisch_taiwan krieg gibt_taiwan militärisch unterstützt_bald taiwan krieg | | 711 | medizinrecht fachbuchautorin de - medizinrecht autorin buches - fachanwältin medizinrecht autorin - bahner fachanwältin medizinrecht - medizinrecht fachbuchautorin | 22 | 711_medizinrecht fachbuchautorin de_medizinrecht autorin buches_fachanwältin medizinrecht autorin_bahner fachanwältin medizinrecht | | 712 | vollmilchpulver grundnahrungsmittel krisenvorsorge - haltbar bio vollmilchpulver - bio vollmilchpulver grundnahrungsmittel - bio vollmilchpulver dose - basics bio vollmilchpulver | 22 | 712_vollmilchpulver grundnahrungsmittel krisenvorsorge_haltbar bio vollmilchpulver_bio vollmilchpulver grundnahrungsmittel_bio vollmilchpulver dose | | 713 | reawaken america tour - america tour see - tour redmond april - america tour redmond - america tour | 22 | 713_reawaken america tour_america tour see_tour redmond april_america tour redmond | | 714 | twitch matthie banküberweisung - bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj stream - bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj - revolt21 bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj - facebook instagram twitch | 22 | 714_twitch matthie banküberweisung_bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj stream_bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj_revolt21 bitcoin bc1qzz8uwg8l96hpv5tmxvyjuxd8jxfy4macftsrpj | | 715 | mag beneder impfpflichtgesetz - verwaltungsstrafen wegen impfpflicht - verwaltungsstrafverfahren impfpflichtgesetz - impfpflicht 31 2022 - wegen impfpflicht 31 | 22 | 715_mag beneder impfpflichtgesetz_verwaltungsstrafen wegen impfpflicht_verwaltungsstrafverfahren impfpflichtgesetz_impfpflicht 31 2022 | | 716 | teuersten naturkatastrophenjahre schon - teuersten naturkatastrophenjahre - zufolge teuersten naturkatastrophenjahre - schäden naturkatastrophen weltweit - versicherten schäden naturkatastrophen | 22 | 716_teuersten naturkatastrophenjahre schon_teuersten naturkatastrophenjahre_zufolge teuersten naturkatastrophenjahre_schäden naturkatastrophen weltweit | | 717 | perestroika ausdruck ideologischen - eingeschlagen begreifen perestroika - perestroika täuschungwas - perestroika täuschungwas simpsons - dargestellt perestroika täuschung | 22 | 717_perestroika ausdruck ideologischen_eingeschlagen begreifen perestroika_perestroika täuschungwas_perestroika täuschungwas simpsons | | 718 | taiwan next - trump predicted taiwan - meeting taiwan president - china taiwan - taiwan china | 22 | 718_taiwan next_trump predicted taiwan_meeting taiwan president_china taiwan | | 719 | hildegard bingen klosterfrau - hildegard - hildegard bingen - bingen klosterfrau äbtissin - klosterfrau äbtissin | 22 | 719_hildegard bingen klosterfrau_hildegard_hildegard bingen_bingen klosterfrau äbtissin | | 720 | gewalt täter opfer - voller gewalt prof - gewalt täter - voller gewalt - gewalt un | 22 | 720_gewalt täter opfer_voller gewalt prof_gewalt täter_voller gewalt | | 721 | buch tipp germania - deutsche kulturgeschichte - deutsche kulturgeschichte 14 - germania zweitausendjahre deutsche - zweitausendjahre deutsche | 22 | 721_buch tipp germania_deutsche kulturgeschichte_deutsche kulturgeschichte 14_germania zweitausendjahre deutsche | | 722 | tv online kongress - mittelerde tv - online kongress liebe - mittelerde tv online - webseite mittelerde tv | 22 | 722_tv online kongress_mittelerde tv_online kongress 
liebe_mittelerde tv online | | 723 | florida müsst masken - florida lächerlich sagt - gouverneur florida lächerlich - wow gouverneur florida - müsst masken | 22 | 723_florida müsst masken_florida lächerlich sagt_gouverneur florida lächerlich_wow gouverneur florida | | 724 | zusicherung betrifft zwangsmaßnahmen - zusicherung passé bitte - betrifft zwangsmaßnahmen beugehaft - betrifft zwangsmaßnahmen - beugehaft eingesetzt darf | 22 | 724_zusicherung betrifft zwangsmaßnahmen_zusicherung passé bitte_betrifft zwangsmaßnahmen beugehaft_betrifft zwangsmaßnahmen | | 725 | leitet nachricht bitte - bestellt habt schreibt - rundbriefabo folgt - sobald bestellt habt - folgt rundbriefabo | 22 | 725_leitet nachricht bitte_bestellt habt schreibt_rundbriefabo folgt_sobald bestellt habt | | 726 | heater mobile gasheizung - mobile gasheizung inkl - mobile gasheizung ebenfalls - lüftung mobile gasheizung - mobile gasheizung | 22 | 726_heater mobile gasheizung_mobile gasheizung inkl_mobile gasheizung ebenfalls_lüftung mobile gasheizung | | 727 | ukraine krieg wer - eigentlich ukraine krieg - krieg politik tun - krieg politik - wetter krieg politik | 22 | 727_ukraine krieg wer_eigentlich ukraine krieg_krieg politik tun_krieg politik | | 728 | neue semester gegenuni - neue semester - kurse vergangenen semestern - startet neue semester - semester gegenuni diesmal | 22 | 728_neue semester gegenuni_neue semester_kurse vergangenen semestern_startet neue semester | | 729 | guardian purifier wasserfilter - modernste mobile wasserfilter - guardian wasserfilter pump - wasserfilter welt sicherer - purifier wasserfilter globetrotter | 22 | 729_guardian purifier wasserfilter_modernste mobile wasserfilter_guardian wasserfilter pump_wasserfilter welt sicherer | | 730 | westukraine seit 1960er - staat ukraine drehscheibe - ukrainischer nationalisten ssgalicia - westukraine seit - denen cia ukraine | 22 | 730_westukraine seit 1960er_staat ukraine drehscheibe_ukrainischer nationalisten ssgalicia_westukraine seit | | 731 | spricht statt brunner - beneder spricht - spricht statt - spricht - änderung beneder spricht | 22 | 731_spricht statt brunner_beneder spricht_spricht statt_spricht | | 732 | sorgten unbekannte flugobjekte - ufo abschüsse große - unbekannte flugobjekte tatsächlich - ufo abschüsse - neuen flugobjekte zylindrische | 22 | 732_sorgten unbekannte flugobjekte_ufo abschüsse große_unbekannte flugobjekte tatsächlich_ufo abschüsse | | 733 | cdl erzielt einflussreiche - cdl löst blutgerinnsel - sogar sporen cdl - cdl preiswert jedermann - liegt problem cdl | 22 | 733_cdl erzielt einflussreiche_cdl löst blutgerinnsel_sogar sporen cdl_cdl preiswert jedermann | | 734 | verkehrswende - mai sensible fahrzeugdaten - umweltministerin tempolimit durchzusetzen - autobranche - sichtnun weit autofahren | 22 | 734_verkehrswende_mai sensible fahrzeugdaten_umweltministerin tempolimit durchzusetzen_autobranche | | 735 | vermögensverwaltungen big - zusammenballung vermögensverwaltungen big - nennt zusammenballung vermögensverwaltungen - geld größte macht - vermögensverwaltungen big tech | 22 | 735_vermögensverwaltungen big_zusammenballung vermögensverwaltungen big_nennt zusammenballung vermögensverwaltungen_geld größte macht | | 736 | gulaschkanone eintopfofen grill - grill reduziert versandkostenfrei - holzgriff liter grillrost - eintopfofen grill - eintopfofen grill reduziert | 22 | 736_gulaschkanone eintopfofen grill_grill reduziert versandkostenfrei_holzgriff liter grillrost_eintopfofen grill | | 737 | hetzt widerstand friedlichen 
- friedlichen widerstand gewinnen - widerstand friedlichen - widerstand friedlichen verlassen - sagen widerstand härter | 22 | 737_hetzt widerstand friedlichen_friedlichen widerstand gewinnen_widerstand friedlichen_widerstand friedlichen verlassen | | 738 | leute offiziell beim - video erzählt nachricht - offiziell beim - ablauf video erzählt - infos ablauf video | 22 | 738_leute offiziell beim_video erzählt nachricht_offiziell beim_ablauf video erzählt | | 739 | verordnungen gesetzeswidrig dennoch - verordnungen schaden genommen - 20 verordnungen gesundheitsministeriums - verordnungen gesundheitsministeriums 2021 - verordnungen verfassungswidrig wurden | 22 | 739_verordnungen gesetzeswidrig dennoch_verordnungen schaden genommen_20 verordnungen gesundheitsministeriums_verordnungen gesundheitsministeriums 2021 | | 740 | white supremacist - weißen mann ausgetrieben - weißer mann - weißen mann - rassismus weiße | 21 | 740_white supremacist_weißen mann ausgetrieben_weißer mann_weißen mann | | 741 | video charkow ukrainische - ukrainischer soldat dabei - ukrainischer soldat - bestätigt ___ украинский - charkow ukrainische | 21 | 741_video charkow ukrainische_ukrainischer soldat dabei_ukrainischer soldat_bestätigt ___ украинский | | 742 | tatsächlich gesetzentwurf impfpflicht - entwurf impfpflichtgesetzes gelten - gesetzentwurf allgemeine impfpflicht - gesetzentwurf impfpflicht - impfpflichtgesetzes | 21 | 742_tatsächlich gesetzentwurf impfpflicht_entwurf impfpflichtgesetzes gelten_gesetzentwurf allgemeine impfpflicht_gesetzentwurf impfpflicht | | 743 | corona impfpflicht österreichs - impfpflicht österreichs regierung - corona impfpflicht österreich - impfpflicht österreichs - regierung corona impfpflicht | 21 | 743_corona impfpflicht österreichs_impfpflicht österreichs regierung_corona impfpflicht österreich_impfpflicht österreichs | | 744 | boden zerstörten deutschland - zerstörten deutschland - deutsch german audio - ende deutschland - zerstörten deutschland ende | 21 | 744_boden zerstörten deutschland_zerstörten deutschland_deutsch german audio_ende deutschland | | 745 | arizona senator wendy - arizona senator - senator wendy - conservatives - election integrity | 21 | 745_arizona senator wendy_arizona senator_senator wendy_conservatives | | 746 | personal greetings germany - greetings germany - greetings germany go - germany go patriots - germany | 21 | 746_personal greetings germany_greetings germany_greetings germany go_germany go patriots | | 747 | unterwegs 2g - unterwegs fahranfänger 2g - spö zib2 nahe - beweist niederösterreichischer bauunternehmer - spö zib2 | 21 | 747_unterwegs 2g_unterwegs fahranfänger 2g_spö zib2 nahe_beweist niederösterreichischer bauunternehmer | | 748 | ukraine zukunft - ukraine zukunft überhaupt - lastwagen anhänger südukraine - wagenknecht ukraine - anhänger südukraine | 21 | 748_ukraine zukunft_ukraine zukunft überhaupt_lastwagen anhänger südukraine_wagenknecht ukraine | | 749 | sofern mainstreammedien übernommen - datensatz öffentlich zurückziehen - mainstreammedien übernommen - mainstreammedien übernommen wurden - mainstreammedien | 21 | 749_sofern mainstreammedien übernommen_datensatz öffentlich zurückziehen_mainstreammedien übernommen_mainstreammedien übernommen wurden | | 750 | schnee wind selbstverteidigungsschirm - schirm trotzt stürmen - selbstverteidigungsschirm robuste schirm - selbstverteidigungsschirm erhalten spezialprodukt - wind selbstverteidigungsschirm | 21 | 750_schnee wind selbstverteidigungsschirm_schirm trotzt 
stürmen_selbstverteidigungsschirm robuste schirm_selbstverteidigungsschirm erhalten spezialprodukt | | 751 | dutch oven hervorragend - kochwunder dutch oven - dutch oven wunder - unendlich dutch oven - dutch oven echtes | 21 | 751_dutch oven hervorragend_kochwunder dutch oven_dutch oven wunder_unendlich dutch oven | | 752 | lokals kontrollieren restaurantleiter - kontrollieren restaurantleiter - restaurantleiter sieht vorfall - kontrollieren restaurantleiter besteht - kontrolle polizei restaurant | 21 | 752_lokals kontrollieren restaurantleiter_kontrollieren restaurantleiter_restaurantleiter sieht vorfall_kontrollieren restaurantleiter besteht | | 753 | schönen oberaargau schweiz - oberaargau schweiz winterliche - gstaad schweiz abendgrüße - schönen schweiz wintergruß - schweiz abendgrüsse | 21 | 753_schönen oberaargau schweiz_oberaargau schweiz winterliche_gstaad schweiz abendgrüße_schönen schweiz wintergruß | | 754 | zensursicheren rundbrief abonnieren - unterstützt förderer zensursicheren - zensursicheren rundbrief - zensursicheren - förderer zensursicheren rundbrief | 21 | 754_zensursicheren rundbrief abonnieren_unterstützt förderer zensursicheren_zensursicheren rundbrief_zensursicheren | | 755 | denen öffentliche debatte - debatte mentalen gleichschritt - nuhr rechnet debatten - bunt gefragt - antwortet buch | 21 | 755_denen öffentliche debatte_debatte mentalen gleichschritt_nuhr rechnet debatten_bunt gefragt | | 756 | wochen nahezu proteste - corona maßnahmen dürfen - corona agenda dahinter - gegner corona maßnahmen - entsetzt totalitäre | 21 | 756_wochen nahezu proteste_corona maßnahmen dürfen_corona agenda dahinter_gegner corona maßnahmen | | 757 | pressekonferenz mfg österreich - bewegung start oberösterreich - mfg österreich motto - 2021 ging mfg - setzte herbst 2021 | 21 | 757_pressekonferenz mfg österreich_bewegung start oberösterreich_mfg österreich motto_2021 ging mfg | | 758 | banküberweisung ovalmedia berlin - numbers playlist telegram - unterstützung banküberweisung ovalmedia - numbers paypal - verwendungszweck numbers paypal | 21 | 758_banküberweisung ovalmedia berlin_numbers playlist telegram_unterstützung banküberweisung ovalmedia_numbers paypal | | 759 | migration migrationswaffe zerstöre - gute böse migrationsdebatte - migranten aufnehmen egal - migrationsdebatte - migration geht migrationswaffe | 21 | 759_migration migrationswaffe zerstöre_gute böse migrationsdebatte_migranten aufnehmen egal_migrationsdebatte | | 760 | nachfolgende videobeschreibung natascha - videobeschreibung natascha übernommen - videobeschreibung natascha - nachfolgende videobeschreibung - kommt sonntag | 21 | 760_nachfolgende videobeschreibung natascha_videobeschreibung natascha übernommen_videobeschreibung natascha_nachfolgende videobeschreibung | | 761 | org animal spirit - hilfe animal spirit - animal spirit sicher - animal spirit leben - gnadenhöfen animal spirit | 21 | 761_org animal spirit_hilfe animal spirit_animal spirit sicher_animal spirit leben | | 762 | epochaler korruptionsskandal - abkassieren epochaler korruptionsskandal - korruptionsskandal begleitet - epochaler korruptionsskandal begleitet - korruptionsskandal | 21 | 762_epochaler korruptionsskandal_abkassieren epochaler korruptionsskandal_korruptionsskandal begleitet_epochaler korruptionsskandal begleitet | | 763 | scheuklappen journalismus reagieren - journalismus reagieren - linkspolitische schlagseite aufmerksame - forschungsinstituts media tenor - beispiele scheuklappen journalismus | 21 | 763_scheuklappen journalismus 
reagieren_journalismus reagieren_linkspolitische schlagseite aufmerksame_forschungsinstituts media tenor | | 764 | komplett siehe team - beschreibung squad stiefel - siehe team - beschreibung squad - finden beschreibung squad | 21 | 764_komplett siehe team_beschreibung squad stiefel_siehe team_beschreibung squad | | 765 | strobl politologin michael - michael brunner mfg - politologin michael - grundrechte jana zellhofer - brunner mfg manfred | 21 | 765_strobl politologin michael_michael brunner mfg_politologin michael_grundrechte jana zellhofer | | 766 | warnt kraftwerksausfällen wintererdgas - kraftwerksausfällen wintererdgas - gasknappheit - gaskraftwerke energiewende retten - setzt unbeirrt gaskraftwerke | 21 | 766_warnt kraftwerksausfällen wintererdgas_kraftwerksausfällen wintererdgas_gasknappheit_gaskraftwerke energiewende retten | | 767 | fake news weltweit - fake news media - fake news - gruppierung fake news - fake news fake | 21 | 767_fake news weltweit_fake news media_fake news_gruppierung fake news | | 768 | individuell durchzustehen sorge - kopf bringt sorgen - leben geht bewältigung - ängste herausforderungen größer - bringt sorgen | 21 | 768_individuell durchzustehen sorge_kopf bringt sorgen_leben geht bewältigung_ängste herausforderungen größer | | 769 | ausweisung russischer diplomaten - russischen diplomatischen vertretungen - russischer diplomaten wurden - russische diplomaten - russischer diplomaten | 21 | 769_ausweisung russischer diplomaten_russischen diplomatischen vertretungen_russischer diplomaten wurden_russische diplomaten | | 770 | akkukapazitäten kraftpaket stundenlange - akkukapazitäten kraftpaket - enormer akkukapazitäten kraftpaket - kraftpaket stundenlange energieversorgung - batteriegespeisten stromgeneratoren powerstation | 21 | 770_akkukapazitäten kraftpaket stundenlange_akkukapazitäten kraftpaket_enormer akkukapazitäten kraftpaket_kraftpaket stundenlange energieversorgung | | 771 | 10 days darkness - january 28 10 - feb 14 - feb 14 feb - calendar feb 14 | 21 | 771_10 days darkness_january 28 10_feb 14_feb 14 feb | | 772 | chelsea ukrainekrieg - weitere russische oligarchen - club chelsea ukrainekrieg - israelisch russische oligarch - eingefroren transaktionen britischen | 21 | 772_chelsea ukrainekrieg_weitere russische oligarchen_club chelsea ukrainekrieg_israelisch russische oligarch | | 773 | österreich beiträge videos - nachdem kritische videos - aufklärung österreich beiträge - videos frieden freiheit - kritische videos yt | 21 | 773_österreich beiträge videos_nachdem kritische videos_aufklärung österreich beiträge_videos frieden freiheit | | 774 | truths unrevealing - uncover hidden truths - wahrheit unzerstörbar - treffen wahrheit lüge - mal wahrheitsgetreu | 21 | 774_truths unrevealing_uncover hidden truths_wahrheit unzerstörbar_treffen wahrheit lüge | | 775 | findest musikvideo - song mainstream plattformen - streame song mainstream - verbreite song - song mainstream | 21 | 775_findest musikvideo_song mainstream plattformen_streame song mainstream_verbreite song | | 776 | 12 2021 magdeburg - 2021 magdeburg - magdeburg 06 - magdeburg 13 02 - magdeburg 06 02 | 21 | 776_12 2021 magdeburg_2021 magdeburg_magdeburg 06_magdeburg 13 02 | | 777 | handel sowohl russland - ausfuhr düngemitteln - peking sanktionen russland - sanktionen russland beteiligen - russland produziert | 21 | 777_handel sowohl russland_ausfuhr düngemitteln_peking sanktionen russland_sanktionen russland beteiligen | | 778 | sagen kostenlos abonnieren - abonnieren sagen kostenlos - 
vorgewagt kostenlos abonnieren - kostenlos abonnieren sagen - kostenlos abonnieren purer | 21 | 778_sagen kostenlos abonnieren_abonnieren sagen kostenlos_vorgewagt kostenlos abonnieren_kostenlos abonnieren sagen | | 779 | kanada abendgrüße - kanada abendgrüße northumberlandstrait - liebe grüße kanada - lichtgrüße kanada - lieben kanadier kommen | 21 | 779_kanada abendgrüße_kanada abendgrüße northumberlandstrait_liebe grüße kanada_lichtgrüße kanada | | 780 | toskana parkplatz 19 - rohrbach 18 00 - uhr stainz hauptplatz - stainz hauptplatz 18 - uhr demo rohrbach | 21 | 780_toskana parkplatz 19_rohrbach 18 00_uhr stainz hauptplatz_stainz hauptplatz 18 | | 781 | immunität aktuellen covid - covid 19 grundimmunisierung - 19 impfstoffen weit - impfstoffen weit überlegen - 19 grundimmunisierung schutz | 21 | 781_immunität aktuellen covid_covid 19 grundimmunisierung_19 impfstoffen weit_impfstoffen weit überlegen | | 782 | impfstoff zerstört anstatt - impfungen organe betreffen - körperzellen impfstoff zerstört - ergeben hauptbestandteile impfstoffs - impfstoff zerstört | 21 | 782_impfstoff zerstört anstatt_impfungen organe betreffen_körperzellen impfstoff zerstört_ergeben hauptbestandteile impfstoffs | | 783 | zentrum pandemie impfstoffe - pandemie impfstoffe - erkrankungen bekannten impfnebenwirkungen - pandemie impfstoffe therapeutika - vakzin komplikation hervorrufen | 21 | 783_zentrum pandemie impfstoffe_pandemie impfstoffe_erkrankungen bekannten impfnebenwirkungen_pandemie impfstoffe therapeutika | | 784 | demonstration freitag 11 - demonstration freitag - kundgebung demonstration freitag - demonstration - veranstalteten kundgebung demonstration | 21 | 784_demonstration freitag 11_demonstration freitag_kundgebung demonstration freitag_demonstration | | 785 | use permitted copyright - permitted copyright statute - permitted copyright - copyright statute - use copyrighted materials | 21 | 785_use permitted copyright_permitted copyright statute_permitted copyright_copyright statute | | 786 | meint freiheit vielmehr - zwängen meint freiheit - freiheit thema tagen - freiheit thema - meint freiheit | 21 | 786_meint freiheit vielmehr_zwängen meint freiheit_freiheit thema tagen_freiheit thema | | 787 | prof dr harald - prof dr günter - ch prof dr - prof dr jörg - dr rené kegelmann | 21 | 787_prof dr harald_prof dr günter_ch prof dr_prof dr jörg | | 788 | wichtiges video impfpflicht - video impfpflicht befürchten - video impfpflicht - klartext impfpflicht bitte - spricht klartext impfpflicht | 21 | 788_wichtiges video impfpflicht_video impfpflicht befürchten_video impfpflicht_klartext impfpflicht bitte | | 789 | heilpraktiker ärzte rosenheim - ärzte rosenheim dr - rosenheim dr med - rosenheim dr - krenn heilpraktikerin gründerin | 21 | 789_heilpraktiker ärzte rosenheim_ärzte rosenheim dr_rosenheim dr med_rosenheim dr | | 790 | korsika proteste ausschreitungen - korsika proteste - proteste ausschreitungen französische - ausschreitungen französische zentralgewaltauf - zentralgewaltauf französischen mittelmeerinsel | 20 | 790_korsika proteste ausschreitungen_korsika proteste_proteste ausschreitungen französische_ausschreitungen französische zentralgewaltauf | | 791 | protesttag impfzwang statt - bundesweiter aktions protesttag - woche impfzwang demonstrieren - aktions protesttag - protesttag | 20 | 791_protesttag impfzwang statt_bundesweiter aktions protesttag_woche impfzwang demonstrieren_aktions protesttag | | 792 | jaco10 pine pollen - pine pollen coffee - pollen coffee - article pine pollen - pine 
pollen article | 20 | 792_jaco10 pine pollen_pine pollen coffee_pollen coffee_article pine pollen | | 793 | chinesische spionageballons - chinese spy balloons - reporting chinese balloons - spy balloons spy - spy balloons | 20 | 793_chinesische spionageballons_chinese spy balloons_reporting chinese balloons_spy balloons spy | | 794 | woodcraft camping legendäre - woodcraft camping - freien woodcraft camping - woodcraft camping urvaters - verfügung woodcraft camping | 20 | 794_woodcraft camping legendäre_woodcraft camping_freien woodcraft camping_woodcraft camping urvaters | | 795 | wildgebieten wasserfilter hält - schützt zudem viren - entwicklungsländern wildgebieten wasserfilter - wasserfilter hält extrem - wildgebieten wasserfilter | 20 | 795_wildgebieten wasserfilter hält_schützt zudem viren_entwicklungsländern wildgebieten wasserfilter_wasserfilter hält extrem | | 796 | kryptowährungen bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7 - bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7 ether - bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7 - digistore24 kryptowährungen bitcoin - kryptowährungen bitcoin | 20 | 796_kryptowährungen bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7_bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7 ether_bitcoin 1khhuud2q85mmu4kp6rw13q9xthri7v1u7_digistore24 kryptowährungen bitcoin | | 797 | verbliebene minderheit ungeimpften - lautstarke minderheit radikal - minderheit radikal vorgeht - minderheit ungeimpften - lautstarke minderheit | 20 | 797_verbliebene minderheit ungeimpften_lautstarke minderheit radikal_minderheit radikal vorgeht_minderheit ungeimpften | | 798 | betroffenen anlagen satellitenkommunikation - störungen satelliten netzwerken - störungen satelliten - satellitenstörung - anlagen satellitenkommunikation angebunden | 20 | 798_betroffenen anlagen satellitenkommunikation_störungen satelliten netzwerken_störungen satelliten_satellitenstörung | | 799 | mobiltelefons verfügung freeplay - usb kabel möglich - kabel möglich - ladung mobiltelefons - mobiltelefons | 20 | 799_mobiltelefons verfügung freeplay_usb kabel möglich_kabel möglich_ladung mobiltelefons | | 800 | tennis star - tennis damen sagte - tennis tímea babos - tennisstar - französischer tennisstar | 20 | 800_tennis star_tennis damen sagte_tennis tímea babos_tennisstar | | 801 | nachwort italienischen philosophen - italienischen philosophen giorgio - italienischen philosophen - herausgegebene essayband konnte - philosophen giorgio | 20 | 801_nachwort italienischen philosophen_italienischen philosophen giorgio_italienischen philosophen_herausgegebene essayband konnte | | 802 | positiven testergebnissen überlastung - testergebnissen überlastung - positiv getestet trotz - testergebnissen überlastung spitäler - positiv getestet sonden | 20 | 802_positiven testergebnissen überlastung_testergebnissen überlastung_positiv getestet trotz_testergebnissen überlastung spitäler | | 803 | recht impfung abzulehnen - deutschen justiz schikane - impfpflicht wäre verfassungswidrig - impfung abzulehnen impfpflicht - berlin strafanzeige | 20 | 803_recht impfung abzulehnen_deutschen justiz schikane_impfpflicht wäre verfassungswidrig_impfung abzulehnen impfpflicht | | 804 | kino schaffte drehbuch - kino vlt lauten - kino schaffte - lief kinos - lief kinos schade | 20 | 804_kino schaffte drehbuch_kino vlt lauten_kino schaffte_lief kinos | | 805 | großstörung deutschen telekom - telekom großraum frankfurt - deutschen telekom düsseldorf - deutschen telekom - gestört mehrere mobilfunkstationen | 20 | 805_großstörung deutschen telekom_telekom 
großraum frankfurt_deutschen telekom düsseldorf_deutschen telekom | | 806 | autofahrer zerren klimakleber - straßenblockaden berlin - autofahrer sitzblockaden provoziert - autofahrer sitzblockaden - klima terroristen | 20 | 806_autofahrer zerren klimakleber_straßenblockaden berlin_autofahrer sitzblockaden provoziert_autofahrer sitzblockaden | | 807 | denen gegenwärtige politik - bundeskanzleramt anwesenheit landeshauptleute - unruhe bald groß - menschen denen gegenwärtige - brennt unruhe bald | 20 | 807_denen gegenwärtige politik_bundeskanzleramt anwesenheit landeshauptleute_unruhe bald groß_menschen denen gegenwärtige | | 808 | journalist opfer völlig - gez journalist erlebt - berichten journalist öffentlich - journalist opfer - journalist erlebt | 20 | 808_journalist opfer völlig_gez journalist erlebt_berichten journalist öffentlich_journalist opfer | | 809 | gründete project veritas - project veritas - entlarvt project veritas - project veritas jahr - gemeinnütziges journalistisches unternehmen | 20 | 809_gründete project veritas_project veritas_entlarvt project veritas_project veritas jahr | | 810 | walpurga interview wien - walpurga interview - walter kammerhofer kürzlich - mikl leitner walpurgas - leitner walpurgas | 20 | 810_walpurga interview wien_walpurga interview_walter kammerhofer kürzlich_mikl leitner walpurgas | | 811 | ignazbearth solidarität aufklärungshelden - solidarität aufklärungshelden - jemand solidarisch wäre - grüße solidarität arne - jemand solidarisch | 20 | 811_ignazbearth solidarität aufklärungshelden_solidarität aufklärungshelden_jemand solidarisch wäre_grüße solidarität arne | | 812 | mobile heizstrahler gaskartuschen - dreifuß gaskartusche mobile - gaskartuschen praktische kompakte - gaskartusche mobile - gaskartusche mobile heizstrahler | 20 | 812_mobile heizstrahler gaskartuschen_dreifuß gaskartusche mobile_gaskartuschen praktische kompakte_gaskartusche mobile | | 813 | ende aussetzung impfpflicht - aussetzung impfpflicht reine - impfpflicht darf schublade - aussetzung impfpflicht - davon ausgehen impfpflicht | 20 | 813_ende aussetzung impfpflicht_aussetzung impfpflicht reine_impfpflicht darf schublade_aussetzung impfpflicht | | 814 | parteinahme wiener polizei - wiener polizei zahlreichen - schläger polizei sehen - polizei sehen audio - polizei sehen | 20 | 814_parteinahme wiener polizei_wiener polizei zahlreichen_schläger polizei sehen_polizei sehen audio | | 815 | robusten widerstandsfähigen umrüstgasschlauch - widerstandsfähigen umrüstgasschlauch handelsüblichen - widerstandsfähigen umrüstgasschlauch - umrüstgasschlauch handelsüblichen - umrüstgasschlauch handelsüblichen 11kg | 20 | 815_robusten widerstandsfähigen umrüstgasschlauch_widerstandsfähigen umrüstgasschlauch handelsüblichen_widerstandsfähigen umrüstgasschlauch_umrüstgasschlauch handelsüblichen | | 816 | gates ex wife - sagte melinda gates - bill gates ex - melinda gates - bill gates öffentlich | 20 | 816_gates ex wife_sagte melinda gates_bill gates ex_melinda gates | | 817 | neuinfektionen 27 todesfälle - 38 060 neuinfektionen - neuinfektionen 26 todesfälle - neuinfektionen 40 todesfälle - 957 fälle coronavirus | 20 | 817_neuinfektionen 27 todesfälle_38 060 neuinfektionen_neuinfektionen 26 todesfälle_neuinfektionen 40 todesfälle | | 818 | impfungen interviewt leidvollen - beeinträchtigungen impfungen interviewt - impfungen interviewt - startet dokumentarfilm geimpft - 2022 startet dokumentarfilm | 20 | 818_impfungen interviewt leidvollen_beeinträchtigungen impfungen interviewt_impfungen 
interviewt_startet dokumentarfilm geimpft | | 819 | mitverfolgen russischen soldaten - russischen soldaten greifen - russischen soldaten tun - vorgeht russischen soldaten - russischen soldaten | 20 | 819_mitverfolgen russischen soldaten_russischen soldaten greifen_russischen soldaten tun_vorgeht russischen soldaten | | 820 | kaum övp korruptionsuntersuchungsausschuss - övp korruptionsuntersuchungsausschuss - korruptionsuntersuchungsausschuss - övp korruptionsuntersuchungsausschuss begonnen - korruptionsuntersuchungsausschuss begonnen zeigt | 20 | 820_kaum övp korruptionsuntersuchungsausschuss_övp korruptionsuntersuchungsausschuss_korruptionsuntersuchungsausschuss_övp korruptionsuntersuchungsausschuss begonnen | | 821 | original storm kettle - usw wasser kochen - storm kettle - storm kettle kommt - wasser kochen windigstem | 20 | 821_original storm kettle_usw wasser kochen_storm kettle_storm kettle kommt | | 822 | entweichen fermentierglas gelingen - fermentierglas gelingen selbsteingelegten - entweichen fermentierglas - fermentierglas gelingen - glas gase hingegen | 20 | 822_entweichen fermentierglas gelingen_fermentierglas gelingen selbsteingelegten_entweichen fermentierglas_fermentierglas gelingen | | 823 | hochkorrupten deutschen krankensystem - überall hochkorrupten deutschen - deutschland millionen betrugsfälle - hochkorrupten deutschen - aufgedeckt deutschland millionen | 20 | 823_hochkorrupten deutschen krankensystem_überall hochkorrupten deutschen_deutschland millionen betrugsfälle_hochkorrupten deutschen | | 824 | freiheitsbewegung finde etappensieg - freiheitsbewegung finde - wien menschen freiheit - feiern massiven widerstand - parteien fpö mfg | 20 | 824_freiheitsbewegung finde etappensieg_freiheitsbewegung finde_wien menschen freiheit_feiern massiven widerstand | | 825 | verquere logik olaf - leiter olaf scholz - olaf scholz infektionsgeschehen - bundeskanzler alexander schallenberg - breuer leiter olaf | 20 | 825_verquere logik olaf_leiter olaf scholz_olaf scholz infektionsgeschehen_bundeskanzler alexander schallenberg | | 826 | berlin genderzwang schulen - berlin wer gendert - berlin genderzwang - umsetzung genderideologien schulen - genderideologien schulen geht | 20 | 826_berlin genderzwang schulen_berlin wer gendert_berlin genderzwang_umsetzung genderideologien schulen | | 827 | flüchtlingen deutschland ex - unesco friedenspreis ausgezeichnet - kulturorganisation unesco flüchtlingspolitik - flüchtlingen mutige entscheidung - flüchtlingspolitik 2015 | 20 | 827_flüchtlingen deutschland ex_unesco friedenspreis ausgezeichnet_kulturorganisation unesco flüchtlingspolitik_flüchtlingen mutige entscheidung | | 828 | trinkwasserqualität mobilen osmoseanlage - trinkwasserqualität mobilen - trinkwasserqualität innovative - beste wasser gesundes - maximale trinkwasserqualität mobilen | 20 | 828_trinkwasserqualität mobilen osmoseanlage_trinkwasserqualität mobilen_trinkwasserqualität innovative_beste wasser gesundes | </details> ## Training hyperparameters * calculate_probabilities: True * language: multilingual * low_memory: False * min_topic_size: 10 * n_gram_range: (1, 1) * nr_topics: None * seed_topic_list: None * top_n_words: 10 * verbose: True * zeroshot_min_similarity: 0.7 * zeroshot_topic_list: None ## Framework versions * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.6.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
{"library_name": "bertopic", "tags": ["bertopic"], "pipeline_tag": "text-classification"}
RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_14_prob
null
[ "bertopic", "text-classification", "region:us" ]
null
2024-04-17T11:07:18+00:00
[]
[]
TAGS #bertopic #text-classification #region-us
impf\_ukrain\_postcov\_all\_sns\_topics\_umap\_lok\_hdbscan\_lok\_ctfidf\_seed\_14\_prob ======================================================================================== This is a BERTopic model. BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets. Usage ----- To use this model, please install BERTopic: You can use the model as follows: Topic overview -------------- * Number of topics: 830 * Number of training documents: 91393 Click here for an overview of all topics. Training hyperparameters ------------------------ * calculate\_probabilities: True * language: multilingual * low\_memory: False * min\_topic\_size: 10 * n\_gram\_range: (1, 1) * nr\_topics: None * seed\_topic\_list: None * top\_n\_words: 10 * verbose: True * zeroshot\_min\_similarity: 0.7 * zeroshot\_topic\_list: None Framework versions ------------------ * Numpy: 1.25.2 * HDBSCAN: 0.8.33 * UMAP: 0.5.6 * Pandas: 1.5.3 * Scikit-Learn: 1.2.2 * Sentence-transformers: 2.6.1 * Transformers: 4.38.2 * Numba: 0.58.1 * Plotly: 5.15.0 * Python: 3.10.12
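The Usage section of this card says to install BERTopic and load the model "as follows", but the snippet itself is not reproduced above. A minimal sketch of what that usually looks like, assuming the standard BERTopic API and the repository id listed for this model; both details are assumptions on my part rather than statements from the card:

```python
# Sketch only: standard BERTopic loading recipe, assuming the model was pushed
# to the Hugging Face Hub under the repository id given in this card.
from bertopic import BERTopic  # pip install -U bertopic

topic_model = BERTopic.load(
    "RolMax/impf_ukrain_postcov_all_sns_topics_umap_lok_hdbscan_lok_ctfidf_seed_14_prob"
)

# One row per topic: id, size, and the c-TF-IDF keywords shown in the table above.
print(topic_model.get_topic_info().head())

# Assign topics (and, since calculate_probabilities=True, probabilities) to new texts.
topics, probs = topic_model.transform(["Beispieltext zur Impfpflicht-Debatte"])
```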
[]
[ "TAGS\n#bertopic #text-classification #region-us \n" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [baichuan-inc/Baichuan2-7B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-7B-Chat) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.1+cu118 - Tokenizers 0.15.2
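The card lists the training setup but gives no usage snippet. A minimal loading sketch, assuming this repository contains a PEFT adapter (not merged weights) for the base model named above; the dtype and the example prompt are placeholders of mine:

```python
# Sketch only: attach the adapter from this repo to its Baichuan2-7B-Chat base.
# Assumes the repo holds PEFT adapter weights; nothing in the card confirms this.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "baichuan-inc/Baichuan2-7B-Chat"   # base model named in the card
adapter_id = "hawkling/output"               # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("你好,请介绍一下你自己。", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```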
{"library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "baichuan-inc/Baichuan2-7B-Chat", "model-index": [{"name": "output", "results": []}]}
hawkling/output
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:baichuan-inc/Baichuan2-7B-Chat", "region:us" ]
null
2024-04-17T11:07:29+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-baichuan-inc/Baichuan2-7B-Chat #region-us
# output This model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 1.0 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.1.1+cu118 - Tokenizers 0.15.2
[ "# output\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n- lr_scheduler_type: constant\n- num_epochs: 1.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.1+cu118\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-baichuan-inc/Baichuan2-7B-Chat #region-us \n", "# output\n\nThis model is a fine-tuned version of baichuan-inc/Baichuan2-7B-Chat on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08\n- lr_scheduler_type: constant\n- num_epochs: 1.0", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.39.3\n- Pytorch 2.1.1+cu118\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
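The "How to Get Started" section above promises code but leaves only a placeholder. Since the card itself gives no details, the sketch below is just the generic transformers text-generation recipe applied to this repository's id; everything beyond that id is an assumption:

```python
# Sketch only: generic causal-LM loading; the card provides no model-specific details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OwOOwO/dumbo-krillin41"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs `accelerate`
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```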
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-krillin41
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:08:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # DreamBooth - Hibon/dogbooth This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
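The "How to use" section above contains only a TODO for the example snippet. A minimal sketch with diffusers, using the repository id and the instance prompt "a photo of [v]dog" stated in this card; the fp16/GPU settings and the concrete prompt wording are my assumptions:

```python
# Sketch only: standard diffusers text-to-image inference for a DreamBooth checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Hibon/dogbooth", torch_dtype=torch.float16
).to("cuda")

# The weights were trained on "a photo of [v]dog", so prompts should reuse that token.
image = pipe("a photo of [v]dog in a bucket", num_inference_steps=30).images[0]
image.save("dogbooth_sample.png")
```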
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers"], "base_model": "stabilityai/stable-diffusion-2-1", "inference": true, "instance_prompt": "a photo of [v]dog"}
Hibon/dogbooth
null
[ "diffusers", "tensorboard", "safetensors", "text-to-image", "dreambooth", "diffusers-training", "stable-diffusion", "stable-diffusion-diffusers", "base_model:stabilityai/stable-diffusion-2-1", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-17T11:08:48+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# DreamBooth - Hibon/dogbooth This is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth. You can find some example images in the following. DreamBooth for the text encoder was enabled: False. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# DreamBooth - Hibon/dogbooth\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #text-to-image #dreambooth #diffusers-training #stable-diffusion #stable-diffusion-diffusers #base_model-stabilityai/stable-diffusion-2-1 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# DreamBooth - Hibon/dogbooth\n\nThis is a dreambooth model derived from stabilityai/stable-diffusion-2-1. The weights were trained on a photo of [v]dog using DreamBooth.\nYou can find some example images in the following. \n\n\n\nDreamBooth for the text encoder was enabled: False.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
question-answering
transformers
# mT5-small based Turkish Question Answering System [Google's Multilingual T5-small](https://huggingface.co/google/mt5-small) is fine-tuned on [Turkish SQuAD](https://github.com/boun-tabi/SQuAD-TR) for **Q&A** downstream task by using Pytorch Lightning. The notebook that includes all fine tuning process will be shared on my Github page later [github](https://github.com/google-research/multilingual-t5). mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. ## Usage 🚀 ```python from transformers import T5TokenizerFast, AutoModelForSeq2SeqLM tokenizer = T5TokenizerFast.from_pretrained('google/mt5-small') model = AutoModelForSeq2SeqLM.from_pretrained("anilguven/mt5-small_squad_tr") def get_answer(question,context): input_str = context + " " + question source_encoding=tokenizer( input_str, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") model.to("cpu") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"], num_beams=10, num_return_sequences=1, max_length=128) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return preds #"".join(preds) ``` ### Example 1 ```python question={ "context":"Ingilizce adi 'Normans' Fransizca kelime Normans/Normanz, Normant çogul, modern Fransiz normand, hangi kendisi \ Eski Düsük Frankonian Nortmann 'Northman' ödünç veya dogrudan Eski Norse Norðmaðr, Nortmannus, veya Nordmannus (Ortaçag Latince \ kaydedildi, 9. yüzyil) 'Norseman, Viking' anlamina gelir.", "question":"Norman kelimesinin Latince versiyonu ilk ne zaman kaydedildi?" } get_answer(question["question"],question["context"]) ``` > 9. yüzyil ### Example 2 ```python question={ "context":"Karar sorununa bir örnek asagidaki gibidir. Girdi keyfi bir grafiktir. Sorun, verilen grafigin bagli olup olmadigina \ karar vermekten olusur. Bu karar problemi ile iliskili biçimsel dil daha sonra bagli tüm grafiklerin kümesidir - tabii ki, bu \ dilin kesin bir tanimini elde etmek için, grafiklerin ikili dize olarak nasil kodlandigina karar vermelidir.", "question":"Karar probleminde kullanilan çiktiya bir örnek ne tür bir grafik?" } get_answer(question["question"],question["context"]) ``` > No Answer
{"language": ["tr"], "license": "mit", "tags": ["text-generation", "question-answering", "turkish", "squad", "mt5"], "pipeline_tag": "question-answering"}
anilguven/mt5-small_squad_tr
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "text-generation", "question-answering", "turkish", "squad", "tr", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:11:44+00:00
[]
[ "tr" ]
TAGS #transformers #safetensors #mt5 #text2text-generation #text-generation #question-answering #turkish #squad #tr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# mT5-small based Turkish Question Answering System Google's Multilingual T5-small is fine-tuned on Turkish SQuAD for Q&A downstream task by using Pytorch Lightning. The notebook that includes all fine tuning process will be shared on my Github page later github. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. ## Usage ### Example 1 > 9. yüzyil ### Example 2 > No Answer
[ "# mT5-small based Turkish Question Answering System\n\nGoogle's Multilingual T5-small is fine-tuned on Turkish SQuAD for Q&A downstream task by using Pytorch Lightning.\n\nThe notebook that includes all fine tuning process will be shared on my Github page later github. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it.", "## Usage", "### Example 1\n\n> 9. yüzyil", "### Example 2\n\n> No Answer" ]
[ "TAGS\n#transformers #safetensors #mt5 #text2text-generation #text-generation #question-answering #turkish #squad #tr #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# mT5-small based Turkish Question Answering System\n\nGoogle's Multilingual T5-small is fine-tuned on Turkish SQuAD for Q&A downstream task by using Pytorch Lightning.\n\nThe notebook that includes all fine tuning process will be shared on my Github page later github. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it.", "## Usage", "### Example 1\n\n> 9. yüzyil", "### Example 2\n\n> No Answer" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * [Citaman/command-r-12-layer](https://huggingface.co/Citaman/command-r-12-layer) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Citaman/command-r-12-layer layer_range: [0, 11] - model: Citaman/command-r-12-layer layer_range: [1, 12] merge_method: slerp base_model: Citaman/command-r-12-layer parameters: t: - filter: self_attn value: [0, 0.5, 0.3, 0.7, 1] - filter: mlp value: [1, 0.5, 0.7, 0.3, 0] - value: 0.5 dtype: bfloat16 ```
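For readers who want to reproduce the merge, a sketch of feeding the YAML above back into mergekit. The function and option names follow mergekit's documented Python entry point at the time of writing and may differ between versions; the config filename and output path are placeholders:

```python
# Sketch only: re-run the SLERP configuration shown above with mergekit.
# Equivalent CLI (if installed): mergekit-yaml slerp_config.yaml ./command-r-11-layer
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("slerp_config.yaml", "r", encoding="utf-8") as fp:  # the YAML from this card
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./command-r-11-layer",                        # output directory (placeholder)
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```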
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Citaman/command-r-12-layer"]}
Citaman/command-r-11-layer
null
[ "transformers", "safetensors", "cohere", "text-generation", "mergekit", "merge", "conversational", "base_model:Citaman/command-r-12-layer", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:11:55+00:00
[]
[]
TAGS #transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-12-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * Citaman/command-r-12-layer ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-12-layer", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #cohere #text-generation #mergekit #merge #conversational #base_model-Citaman/command-r-12-layer #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Citaman/command-r-12-layer", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_usp2_dpo1 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0445 - Rewards/chosen: -10.7928 - Rewards/rejected: -14.2959 - Rewards/accuracies: 0.7400 - Rewards/margins: 3.5031 - Logps/rejected: -250.8612 - Logps/chosen: -215.1325 - Logits/rejected: -0.8799 - Logits/chosen: -0.9353 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0632 | 2.67 | 100 | 0.6858 | -5.5164 | -7.1903 | 0.6900 | 1.6739 | -179.8054 | -162.3683 | -0.8170 | -0.7196 | | 0.002 | 5.33 | 200 | 0.7615 | -6.7178 | -9.5839 | 0.7600 | 2.8661 | -203.7411 | -174.3826 | -1.0768 | -1.0701 | | 0.0001 | 8.0 | 300 | 1.0247 | -10.4976 | -13.8923 | 0.7400 | 3.3948 | -246.8256 | -212.1801 | -0.8995 | -0.9506 | | 0.0001 | 10.67 | 400 | 1.0323 | -10.6255 | -14.0760 | 0.75 | 3.4505 | -248.6621 | -213.4589 | -0.8910 | -0.9437 | | 0.0001 | 13.33 | 500 | 1.0328 | -10.7107 | -14.1992 | 0.7400 | 3.4885 | -249.8943 | -214.3115 | -0.8858 | -0.9397 | | 0.0001 | 16.0 | 600 | 1.0378 | -10.7577 | -14.2607 | 0.7400 | 3.5030 | -250.5091 | -214.7812 | -0.8823 | -0.9372 | | 0.0 | 18.67 | 700 | 1.0407 | -10.7811 | -14.2886 | 0.75 | 3.5075 | -250.7885 | -215.0155 | -0.8811 | -0.9363 | | 0.0001 | 21.33 | 800 | 1.0415 | -10.7857 | -14.2997 | 0.7400 | 3.5139 | -250.8989 | -215.0617 | -0.8802 | -0.9359 | | 0.0001 | 24.0 | 900 | 1.0423 | -10.7886 | -14.2954 | 0.7400 | 3.5068 | -250.8562 | -215.0906 | -0.8802 | -0.9356 | | 0.0001 | 26.67 | 1000 | 1.0445 | -10.7928 | -14.2959 | 0.7400 | 3.5031 | -250.8612 | -215.1325 | -0.8799 | -0.9353 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
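### Loading the adapter (illustrative)

A minimal inference sketch, assuming this repository holds a PEFT (LoRA) adapter for the base model listed above; note that `meta-llama/Llama-2-7b-chat-hf` is gated and requires accepting Meta's license on the Hub. The prompt below is illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"    # gated base model
adapter_id = "guoyu-zhang/model_usp2_dpo1"   # this repository (assumed to be a LoRA adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Attach the DPO-trained adapter weights on top of the chat base model.
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "[INST] Summarize the idea behind direct preference optimization. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```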
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_usp2_dpo1", "results": []}]}
guoyu-zhang/model_usp2_dpo1
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T11:12:56+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
model\_usp2\_dpo1 ================= This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0445 * Rewards/chosen: -10.7928 * Rewards/rejected: -14.2959 * Rewards/accuracies: 0.7400 * Rewards/margins: 3.5031 * Logps/rejected: -250.8612 * Logps/chosen: -215.1325 * Logits/rejected: -0.8799 * Logits/chosen: -0.9353 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction

### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play
You can watch your agent **playing directly in your browser**

1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: UXAIR/ppo-huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
UXAIR/ppo-huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-17T11:15:37+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: UXAIR/ppo-huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: UXAIR/ppo-huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: UXAIR/ppo-huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
{"tags": ["fastai"]}
ikerzubi/docknet
null
[ "fastai", "has_space", "region:us" ]
null
2024-04-17T11:16:02+00:00
[]
[]
TAGS #fastai #has_space #region-us
# Amazing! Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the documentation here)! 2. Create a demo in Gradio or Streamlit using Spaces (documentation here). 3. Join the fastai community on the Fastai Discord! Greetings fellow fastlearner ! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
[ "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
[ "TAGS\n#fastai #has_space #region-us \n", "# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!", "# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---", "# Model card", "## Model description\nMore information needed", "## Intended uses & limitations\nMore information needed", "## Training and evaluation data\nMore information needed" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [Qwen/Qwen1.5-32B-Chat](https://huggingface.co/Qwen/Qwen1.5-32B-Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Qwen/Qwen1.5-32B-Chat layer_range: [0, 32] # 32 - sources: - model: Qwen/Qwen1.5-32B-Chat layer_range: [16, 48] # 32 - sources: - model: Qwen/Qwen1.5-32B-Chat layer_range: [32, 64] # 32 merge_method: passthrough dtype: float16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Qwen/Qwen1.5-32B-Chat"]}
zypcastles/Qwen1.5-48B-Chat
null
[ "transformers", "safetensors", "qwen2", "text-generation", "mergekit", "merge", "conversational", "base_model:Qwen/Qwen1.5-32B-Chat", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:16:12+00:00
[]
[]
TAGS #transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-32B-Chat #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * Qwen/Qwen1.5-32B-Chat ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-32B-Chat", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #qwen2 #text-generation #mergekit #merge #conversational #base_model-Qwen/Qwen1.5-32B-Chat #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* Qwen/Qwen1.5-32B-Chat", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
dp911/phituned3
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:17:41+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Trubnik1967/zephyr-7b-beta-Agent-Instruct_v3
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:21:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.39.3 - TensorFlow 2.16.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-uncased", "model-index": [{"name": "dummy-model", "results": []}]}
bluspark/dummy-model
null
[ "transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "base_model:bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:22:02+00:00
[]
[]
TAGS #transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# dummy-model This model is a fine-tuned version of bert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.39.3 - TensorFlow 2.16.1 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# dummy-model\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- TensorFlow 2.16.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #bert #text-classification #generated_from_keras_callback #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# dummy-model\n\nThis model is a fine-tuned version of bert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- optimizer: None\n- training_precision: float32", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- TensorFlow 2.16.1\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
adediu25/implicit-llama2-all
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-17T11:22:42+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Uploaded model - **Developed by:** ranggaaldosas - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
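A minimal inference sketch, assuming the repository stores a LoRA adapter saved from the Unsloth training run described above; the `max_seq_length` value and the prompt are assumptions for illustration, not settings taken from training:

```python
from unsloth import FastLanguageModel

# Load the adapter together with its 4-bit Gemma base, following Unsloth's usual pattern.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ranggaaldosas/lora_model_gemma2b",
    max_seq_length=2048,   # assumption: choose the context length you intend to use
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain LoRA fine-tuning in one sentence.", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```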
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-bnb-4bit"}
ranggaaldosas/lora_model_gemma2b
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:23:51+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: ranggaaldosas - License: apache-2.0 - Finetuned from model : unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: ranggaaldosas\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #gemma #trl #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ranggaaldosas\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/allknowingroger/CeptrixBeagle-12B-MoE <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_XS.gguf) | IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_S.gguf) | Q3_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_S.gguf) | IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ3_M.gguf) | IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_M.gguf) | Q3_K_M | 6.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q3_K_L.gguf) | Q3_K_L | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.IQ4_XS.gguf) | IQ4_XS | 7.1 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q4_K_S.gguf) | Q4_K_S | 7.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q4_K_M.gguf) | Q4_K_M | 7.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q5_K_S.gguf) | Q5_K_S | 9.0 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q5_K_M.gguf) | Q5_K_M | 9.2 | | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q6_K.gguf) | Q6_K | 10.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/CeptrixBeagle-12B-MoE-GGUF/resolve/main/CeptrixBeagle-12B-MoE.Q8_0.gguf) | Q8_0 | 13.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
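## Quick start (illustrative)

A minimal usage sketch for one of the quants listed above, assuming `llama-cpp-python` and `huggingface_hub` are installed; the context size and prompt are illustrative choices, not recommendations from this repository:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Download one of the single-file quants from the table above (Q4_K_M is the "fast, recommended" pick).
gguf_path = hf_hub_download(
    repo_id="mradermacher/CeptrixBeagle-12B-MoE-GGUF",
    filename="CeptrixBeagle-12B-MoE.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # n_ctx is an illustrative choice
out = llm("Question: What is a mixture-of-experts model?\nAnswer:", max_tokens=128)
print(out["choices"][0]["text"])
```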
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/NeuralCeptrix-7B-slerp", "paulml/OmniBeagleSquaredMBX-v3-7B"], "base_model": "allknowingroger/CeptrixBeagle-12B-MoE", "quantized_by": "mradermacher"}
mradermacher/CeptrixBeagle-12B-MoE-GGUF
null
[ "transformers", "gguf", "moe", "frankenmoe", "merge", "mergekit", "lazymergekit", "allknowingroger/NeuralCeptrix-7B-slerp", "paulml/OmniBeagleSquaredMBX-v3-7B", "en", "base_model:allknowingroger/CeptrixBeagle-12B-MoE", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:24:00+00:00
[]
[ "en" ]
TAGS #transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/NeuralCeptrix-7B-slerp #paulml/OmniBeagleSquaredMBX-v3-7B #en #base_model-allknowingroger/CeptrixBeagle-12B-MoE #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #moe #frankenmoe #merge #mergekit #lazymergekit #allknowingroger/NeuralCeptrix-7B-slerp #paulml/OmniBeagleSquaredMBX-v3-7B #en #base_model-allknowingroger/CeptrixBeagle-12B-MoE #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
TYZY89/Llama2-7b-dpo
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:24:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # trained_weights This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0042 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.4519 | 1.0 | 1250 | 1.0075 | | 0.3817 | 2.0 | 2500 | 1.0042 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "trained_weights", "results": []}]}
adediu25/trained_weights
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T11:25:13+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
trained\_weights ================ This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.0042 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 2 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 8 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.03 * num\_epochs: 2 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 2\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Marcoro14-7B-ties Marcoro14-7B-ties is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) ## 🧩 Configuration ```yaml models: - model: mistralai/Mistral-7B-v0.1 # no parameters necessary for base model - model: OpenPipe/mistral-ft-optimized-1218 parameters: density: 0.5 weight: 0.5 - model: mlabonne/NeuralHermes-2.5-Mistral-7B parameters: density: 0.5 weight: 0.3 merge_method: ties base_model: mistralai/Mistral-7B-v0.1 parameters: normalize: true dtype: float16 ```
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B"]}
pwei07/Marcoro14-7B-ties
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:25:18+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #OpenPipe/mistral-ft-optimized-1218 #mlabonne/NeuralHermes-2.5-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Marcoro14-7B-ties Marcoro14-7B-ties is a merge of the following models using mergekit: * OpenPipe/mistral-ft-optimized-1218 * mlabonne/NeuralHermes-2.5-Mistral-7B ## Configuration
[ "# Marcoro14-7B-ties\n\nMarcoro14-7B-ties is a merge of the following models using mergekit:\n* OpenPipe/mistral-ft-optimized-1218\n* mlabonne/NeuralHermes-2.5-Mistral-7B", "## Configuration" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #OpenPipe/mistral-ft-optimized-1218 #mlabonne/NeuralHermes-2.5-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Marcoro14-7B-ties\n\nMarcoro14-7B-ties is a merge of the following models using mergekit:\n* OpenPipe/mistral-ft-optimized-1218\n* mlabonne/NeuralHermes-2.5-Mistral-7B", "## Configuration" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # convnext-large-384-22k-1k-Kontur-competition This model is a fine-tuned version of [facebook/convnext-large-384-22k-1k](https://huggingface.co/facebook/convnext-large-384-22k-1k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0020 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 48 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0003 | 1.0 | 717 | 0.0073 | | 0.0035 | 2.0 | 1434 | 0.0050 | | 0.0 | 3.0 | 2151 | 0.0020 | | 0.0 | 4.0 | 2869 | 0.0000 | | 0.0 | 5.0 | 3585 | 0.0020 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "base_model": "facebook/convnext-large-384-22k-1k", "model-index": [{"name": "convnext-large-384-22k-1k-Kontur-competition", "results": []}]}
t1msan/convnext-large-384-22k-1k-Kontur-competition
null
[ "transformers", "tensorboard", "safetensors", "convnext", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:facebook/convnext-large-384-22k-1k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:26:12+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-large-384-22k-1k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
convnext-large-384-22k-1k-Kontur-competition ============================================ This model is a fine-tuned version of facebook/convnext-large-384-22k-1k on the imagefolder dataset. It achieves the following results on the evaluation set: * Loss: 0.0020 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 48 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 48\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #convnext #image-classification #generated_from_trainer #dataset-imagefolder #base_model-facebook/convnext-large-384-22k-1k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 48\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how works ML-Agents: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity 2. Step 1: Find your model_id: filodoxia/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
{"library_name": "ml-agents", "tags": ["Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy"]}
filodoxia/ppo-Huggy
null
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
null
2024-04-17T11:26:55+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us
# ppo Agent playing Huggy This is a trained model of a ppo agent playing Huggy using the Unity ML-Agents Library. ## Usage (with ML-Agents) The Documentation: URL We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your browser: URL - A *longer tutorial* to understand how works ML-Agents: URL ### Resume the training ### Watch your Agent play You can watch your agent playing directly in your browser 1. If the environment is part of ML-Agents official environments, go to URL 2. Step 1: Find your model_id: filodoxia/ppo-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play
[ "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: filodoxia/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Huggy #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Huggy #region-us \n", "# ppo Agent playing Huggy\n This is a trained model of a ppo agent playing Huggy\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: filodoxia/ppo-Huggy\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
# Uploaded model - **Developed by:** ranggaaldosas - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-bnb-4bit"}
ranggaaldosas/simple_gemma2b
null
[ "transformers", "pytorch", "gemma", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:29:25+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: ranggaaldosas - License: apache-2.0 - Finetuned from model : unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: ranggaaldosas\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #pytorch #gemma #text-generation #text-generation-inference #unsloth #trl #conversational #en #base_model-unsloth/gemma-2b-bnb-4bit #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: ranggaaldosas\n- License: apache-2.0\n- Finetuned from model : unsloth/gemma-2b-bnb-4bit\n\nThis gemma model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_usp2_400 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.1146 - Rewards/chosen: -10.1910 - Rewards/rejected: -12.8552 - Rewards/accuracies: 0.5700 - Rewards/margins: 2.6642 - Logps/rejected: -130.0886 - Logps/chosen: -125.3267 - Logits/rejected: 0.1734 - Logits/chosen: 0.1410 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0066 | 4.0 | 100 | 2.4563 | -5.8278 | -7.6948 | 0.5900 | 1.8670 | -124.3548 | -120.4786 | -0.0743 | -0.0887 | | 0.042 | 8.0 | 200 | 2.8011 | -2.8779 | -4.7588 | 0.5300 | 1.8808 | -121.0926 | -117.2011 | 0.3427 | 0.3247 | | 0.0009 | 12.0 | 300 | 3.2063 | -16.1959 | -19.1144 | 0.5500 | 2.9186 | -137.0433 | -131.9988 | 0.1998 | 0.1756 | | 0.0001 | 16.0 | 400 | 3.1047 | -10.1343 | -12.7872 | 0.5800 | 2.6529 | -130.0131 | -125.2637 | 0.1757 | 0.1437 | | 0.0 | 20.0 | 500 | 3.1359 | -10.1980 | -12.8447 | 0.5800 | 2.6467 | -130.0769 | -125.3345 | 0.1736 | 0.1412 | | 0.0 | 24.0 | 600 | 3.1186 | -10.1842 | -12.8467 | 0.5800 | 2.6625 | -130.0792 | -125.3191 | 0.1732 | 0.1409 | | 0.0 | 28.0 | 700 | 3.1174 | -10.2101 | -12.8729 | 0.5900 | 2.6628 | -130.1082 | -125.3479 | 0.1733 | 0.1406 | | 0.0 | 32.0 | 800 | 3.1257 | -10.1973 | -12.8683 | 0.5900 | 2.6711 | -130.1032 | -125.3336 | 0.1735 | 0.1409 | | 0.0 | 36.0 | 900 | 3.1112 | -10.1620 | -12.8766 | 0.5800 | 2.7147 | -130.1124 | -125.2944 | 0.1735 | 0.1413 | | 0.0 | 40.0 | 1000 | 3.1146 | -10.1910 | -12.8552 | 0.5700 | 2.6642 | -130.0886 | -125.3267 | 0.1734 | 0.1410 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_usp2_400", "results": []}]}
guoyu-zhang/model_hh_usp2_400
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T11:31:02+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
model\_hh\_usp2\_400 ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 3.1146 * Rewards/chosen: -10.1910 * Rewards/rejected: -12.8552 * Rewards/accuracies: 0.5700 * Rewards/margins: 2.6642 * Logps/rejected: -130.0886 * Logps/chosen: -125.3267 * Logits/rejected: 0.1734 * Logits/chosen: 0.1410 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) as a base. ### Models Merged The following models were included in the merge: * [Elizezen/Sapphire-7B](https://huggingface.co/Elizezen/Sapphire-7B) * [Elizezen/Antler-7B](https://huggingface.co/Elizezen/Antler-7B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: amazingvince/Not-WizardLM-2-7B #no parameters necessary for base model - model: Elizezen/Sapphire-7B parameters: density: 0.5 weight: 0.5 - model: Elizezen/Antler-7B parameters: density: 0.5 weight: 0.5 merge_method: ties base_model: amazingvince/Not-WizardLM-2-7B parameters: normalize: false int8_mask: true dtype: float16 ```
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["Elizezen/Sapphire-7B", "amazingvince/Not-WizardLM-2-7B", "Elizezen/Antler-7B"]}
Exveria/mergetest02
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "arxiv:2306.01708", "base_model:Elizezen/Sapphire-7B", "base_model:amazingvince/Not-WizardLM-2-7B", "base_model:Elizezen/Antler-7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:33:53+00:00
[ "2306.01708" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-Elizezen/Sapphire-7B #base_model-amazingvince/Not-WizardLM-2-7B #base_model-Elizezen/Antler-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# merge This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base. ### Models Merged The following models were included in the merge: * Elizezen/Sapphire-7B * Elizezen/Antler-7B ### Configuration The following YAML configuration was used to produce this model:
[ "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Elizezen/Sapphire-7B\n* Elizezen/Antler-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #arxiv-2306.01708 #base_model-Elizezen/Sapphire-7B #base_model-amazingvince/Not-WizardLM-2-7B #base_model-Elizezen/Antler-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# merge\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the TIES merge method using amazingvince/Not-WizardLM-2-7B as a base.", "### Models Merged\n\nThe following models were included in the merge:\n* Elizezen/Sapphire-7B\n* Elizezen/Antler-7B", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # falcon7binstruct_mentalhealthmodel_oct23 This model is a fine-tuned version of [vilsonrodrigues/falcon-7b-instruct-sharded](https://huggingface.co/vilsonrodrigues/falcon-7b-instruct-sharded) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "vilsonrodrigues/falcon-7b-instruct-sharded", "model-index": [{"name": "falcon7binstruct_mentalhealthmodel_oct23", "results": []}]}
ckhjilfweqhgih/falcon7binstruct_mentalhealthmodel_oct23
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:vilsonrodrigues/falcon-7b-instruct-sharded", "license:apache-2.0", "region:us" ]
null
2024-04-17T11:34:03+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #license-apache-2.0 #region-us
# falcon7binstruct_mentalhealthmodel_oct23 This model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# falcon7binstruct_mentalhealthmodel_oct23\n\nThis model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-vilsonrodrigues/falcon-7b-instruct-sharded #license-apache-2.0 #region-us \n", "# falcon7binstruct_mentalhealthmodel_oct23\n\nThis model is a fine-tuned version of vilsonrodrigues/falcon-7b-instruct-sharded on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 30\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.39.3\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
Test model. Under testing... Recipe: ```yaml base_model: /content/InfinityRP gate_mode: random dtype: bfloat16 # output dtype (float32, float16, or bfloat16) ## (optional) experts_per_token: 2 experts: - source_model: /content/Aurav2 positive_prompts: [] - source_model: /content/Spice positive_prompts: [] - source_model: /content/InfinityRP positive_prompts: [] - source_model: /content/DaCo positive_prompts: [] ```
{"language": ["en"], "license": "apache-2.0", "tags": ["safetensors", "mixtral"]}
R136a1/BeyondInfinity-v2-4x7B
null
[ "transformers", "safetensors", "mixtral", "text-generation", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:35:09+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mixtral #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
Test model. Under testing... Recipe:
[]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-leg-al-perplexity This model is a fine-tuned version of [PlanTL-GOB-ES/RoBERTalex](https://huggingface.co/PlanTL-GOB-ES/RoBERTalex) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.2238 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-06 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 63 | 2.3119 | | No log | 2.0 | 126 | 2.1746 | | No log | 3.0 | 189 | 2.1621 | | No log | 4.0 | 252 | 2.2585 | | No log | 5.0 | 315 | 2.2080 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "PlanTL-GOB-ES/RoBERTalex", "model-index": [{"name": "bert-leg-al-perplexity", "results": []}]}
desarrolloasesoreslocales/bert-leg-al-perplexity
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:PlanTL-GOB-ES/RoBERTalex", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:35:17+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-PlanTL-GOB-ES/RoBERTalex #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bert-leg-al-perplexity ====================== This model is a fine-tuned version of PlanTL-GOB-ES/RoBERTalex on the None dataset. It achieves the following results on the evaluation set: * Loss: 2.2238 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-06 * train\_batch\_size: 16 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-PlanTL-GOB-ES/RoBERTalex #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-06\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Boundary-mistral-4x7b-MoE Boundary-mistral-4x7b-MoE is a Mixture of Experts (MoE) made with the following models: * [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) * [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) * [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) * [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) ## 🧩 Configuration ```yaml base_model: mistralai/Mistral-7B-Instruct-v0.2 dtype: float16 gate_mode: cheap_embed experts: - source_model: HuggingFaceH4/zephyr-7b-beta positive_prompts: ["You are an helpful general-pupose assistant."] - source_model: mistralai/Mistral-7B-Instruct-v0.2 positive_prompts: ["You are helpful assistant."] - source_model: teknium/OpenHermes-2.5-Mistral-7B positive_prompts: ["You are helpful a coding assistant."] - source_model: meta-math/MetaMath-Mistral-7B positive_prompts: ["You are an assistant good at math."] ``` ## 💻 Usage ```python !pip install -qU transformers bitsandbytes accelerate from transformers import AutoTokenizer import transformers import torch model = "NotAiLOL/Boundary-mistral-4x7b-MoE" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True}, ) messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}] prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"license": "apache-2.0", "tags": ["moe", "merge", "mergekit", "HuggingFaceH4/zephyr-7b-beta", "mistralai/Mistral-7B-Instruct-v0.2", "teknium/OpenHermes-2.5-Mistral-7B", "meta-math/MetaMath-Mistral-7B", "Mistral"], "base_model": ["HuggingFaceH4/zephyr-7b-beta", "mistralai/Mistral-7B-Instruct-v0.2", "teknium/OpenHermes-2.5-Mistral-7B", "meta-math/MetaMath-Mistral-7B"]}
NotAiLOL/Boundary-mistral-4x7b-MoE
null
[ "transformers", "safetensors", "mixtral", "text-generation", "moe", "merge", "mergekit", "HuggingFaceH4/zephyr-7b-beta", "mistralai/Mistral-7B-Instruct-v0.2", "teknium/OpenHermes-2.5-Mistral-7B", "meta-math/MetaMath-Mistral-7B", "Mistral", "conversational", "base_model:HuggingFaceH4/zephyr-7b-beta", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "base_model:teknium/OpenHermes-2.5-Mistral-7B", "base_model:meta-math/MetaMath-Mistral-7B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T11:36:00+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #moe #merge #mergekit #HuggingFaceH4/zephyr-7b-beta #mistralai/Mistral-7B-Instruct-v0.2 #teknium/OpenHermes-2.5-Mistral-7B #meta-math/MetaMath-Mistral-7B #Mistral #conversational #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-teknium/OpenHermes-2.5-Mistral-7B #base_model-meta-math/MetaMath-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Boundary-mistral-4x7b-MoE Boundary-mistral-4x7b-MoE is a Mixture of Experts (MoE) made with the following models: * HuggingFaceH4/zephyr-7b-beta * mistralai/Mistral-7B-Instruct-v0.2 * teknium/OpenHermes-2.5-Mistral-7B * meta-math/MetaMath-Mistral-7B ## Configuration ## Usage
[ "# Boundary-mistral-4x7b-MoE\n\nBoundary-mistral-4x7b-MoE is a Mixture of Experts (MoE) made with the following models:\n* HuggingFaceH4/zephyr-7b-beta\n* mistralai/Mistral-7B-Instruct-v0.2\n* teknium/OpenHermes-2.5-Mistral-7B\n* meta-math/MetaMath-Mistral-7B", "## Configuration", "## Usage" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #moe #merge #mergekit #HuggingFaceH4/zephyr-7b-beta #mistralai/Mistral-7B-Instruct-v0.2 #teknium/OpenHermes-2.5-Mistral-7B #meta-math/MetaMath-Mistral-7B #Mistral #conversational #base_model-HuggingFaceH4/zephyr-7b-beta #base_model-mistralai/Mistral-7B-Instruct-v0.2 #base_model-teknium/OpenHermes-2.5-Mistral-7B #base_model-meta-math/MetaMath-Mistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Boundary-mistral-4x7b-MoE\n\nBoundary-mistral-4x7b-MoE is a Mixture of Experts (MoE) made with the following models:\n* HuggingFaceH4/zephyr-7b-beta\n* mistralai/Mistral-7B-Instruct-v0.2\n* teknium/OpenHermes-2.5-Mistral-7B\n* meta-math/MetaMath-Mistral-7B", "## Configuration", "## Usage" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CS505-Dev-CSI-PhoBERT_base_h2 This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"tags": ["generated_from_trainer"], "base_model": "vinai/phobert-base", "model-index": [{"name": "CS505-Dev-CSI-PhoBERT_base_h2", "results": []}]}
ThuyNT/CS505-Dev-CSI-PhoBERT_base_h2
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:vinai/phobert-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:38:01+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us
# CS505-Dev-CSI-PhoBERT_base_h2 This model is a fine-tuned version of vinai/phobert-base on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# CS505-Dev-CSI-PhoBERT_base_h2\n\nThis model is a fine-tuned version of vinai/phobert-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-vinai/phobert-base #autotrain_compatible #endpoints_compatible #region-us \n", "# CS505-Dev-CSI-PhoBERT_base_h2\n\nThis model is a fine-tuned version of vinai/phobert-base on the None dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 64\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 15", "### Training results", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
samehfarouk/quantized_Mistral7B_v2_int8
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T11:39:06+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Trial1-phi2 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "microsoft/phi-2", "model-index": [{"name": "Trial1-phi2", "results": []}]}
krishnakekan01/Trial1-phi2
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-17T11:41:05+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us
# Trial1-phi2 This model is a fine-tuned version of microsoft/phi-2 on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# Trial1-phi2\n\nThis model is a fine-tuned version of microsoft/phi-2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-microsoft/phi-2 #license-mit #region-us \n", "# Trial1-phi2\n\nThis model is a fine-tuned version of microsoft/phi-2 on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.0\n- Transformers 4.40.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
question-answering
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nithin666/bert-finetuned-squad-5epoch-og This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2594 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 27730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.2594 | 0 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "bert-base-cased", "model-index": [{"name": "nithin666/bert-finetuned-squad-5epoch-og", "results": []}]}
nithin666/bert-finetuned-squad-5epoch-og
null
[ "transformers", "tf", "bert", "question-answering", "generated_from_keras_callback", "base_model:bert-base-cased", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T11:44:50+00:00
[]
[]
TAGS #transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-bert-base-cased #license-apache-2.0 #endpoints_compatible #region-us
nithin666/bert-finetuned-squad-5epoch-og ======================================== This model is a fine-tuned version of bert-base-cased on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 1.2594 * Epoch: 0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 2e-05, 'decay\_steps': 27730, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 27730, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #bert #question-answering #generated_from_keras_callback #base_model-bert-base-cased #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 2e-05, 'decay\\_steps': 27730, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]