| Column | Dtype | Values / lengths |
|---------------|-----------------|------------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 – 18.3M |
| metadata | stringlengths | 2 – 1.07B |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | sequencelengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
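Purely as an illustration of the schema above, the columns can be inspected with the `datasets` library. The repository id in the sketch is a hypothetical placeholder (the dataset's actual location is not given here), and streaming is used because the `text` and `metadata` cells can be very large.

```python
# Illustrative sketch only: "user/model-cards-dump" is a hypothetical repository id,
# not the real location of this dataset.
from datasets import load_dataset

ds = load_dataset("user/model-cards-dump", split="train", streaming=True)

# Peek at the first row; the keys should match the columns in the table above.
row = next(iter(ds))
print(sorted(row.keys()))
print(row["id"], row["pipeline_tag"], len(row["text"]))
```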
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
GamblerOnTrain/CVDX-570
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:07:11+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
KingMrock/trained-tokenizer
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:07:13+00:00
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
KoonJamesZ/thai-sentence-transformers-disab-v1
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:07:18+00:00
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

zephyr-speakleash-010-pl-3072-32-16-0.01 - bnb 4bits
- Model creator: https://huggingface.co/Nondzu/
- Original model: https://huggingface.co/Nondzu/zephyr-speakleash-010-pl-3072-32-16-0.01/

Original model description:
---
license: mit
---

[speakleash.org](https://speakleash.org)

## Prompt template: ChatML

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
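The card above documents only the ChatML template, so here is a minimal, hedged sketch of how a pre-quantized bnb 4-bit checkpoint such as this one can typically be loaded and prompted with `transformers`. The example system and user messages are placeholders, and `accelerate`, `bitsandbytes`, and a CUDA GPU are assumed.

```python
# Minimal sketch (not from the original card). Assumes transformers, accelerate and
# bitsandbytes are installed and a CUDA GPU is available; the messages are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")  # weights are stored in 4-bit

# Build the prompt exactly as the ChatML template above specifies.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nNapisz krótkie zdanie o wiośnie.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```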
{}
RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-4bits
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T09:07:19+00:00
null
null
{}
Kudod/checkpoint-3416-final
null
[ "region:us" ]
null
2024-05-03T09:08:17+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
vanisus/abiturientSSTU_test
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:08:23+00:00
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3**

This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

TODO: Add your code

```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub

...
```
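Until the card's TODO is filled in, the following is a hedged sketch of the usual `huggingface_sb3` loading pattern for such a checkpoint. The archive filename is an assumption based on the common naming convention, and `gymnasium` plus `panda-gym` are assumed to be installed.

```python
# Hedged sketch, not the author's code. Assumes gymnasium and panda-gym are installed
# and that the checkpoint follows the usual huggingface_sb3 filename convention.
import gymnasium as gym
import panda_gym  # noqa: F401  (importing registers PandaReachDense-v3)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="lzacchini/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v3")
obs, _ = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```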
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.18 +/- 0.10", "name": "mean_reward", "verified": false}]}]}]}
lzacchini/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-05-03T09:08:54+00:00
text-classification
transformers
{}
yzzh/gemma2b_v4
null
[ "transformers", "safetensors", "gemma", "text-classification", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:09:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EpicJhon/l3-6
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:10:17+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EpicJhon/l3-4
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:10:18+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EpicJhon/l3-7
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:10:19+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/x8trrap
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:10:23+00:00
text-generation
transformers
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

APT3-1B-Base - bnb 4bits
- Model creator: https://huggingface.co/Azurro/
- Original model: https://huggingface.co/Azurro/APT3-1B-Base/

Original model description:
---
license: cc-by-nc-4.0
datasets:
- chrisociepa/wikipedia-pl-20230401
language:
- pl
library_name: transformers
tags:
- llama
- ALLaMo
inference: false
---

# APT3-1B-Base

## Introduction

At [Azurro](https://azurro.pl), we consistently place importance on using the Open Source technologies, both while working on the projects and in our everyday lives. We have decided to share a base language model trained by us. We are confident that smaller language models have great potential, and direct access to them for all people that are interested in such models democratizes this significant and dynamically changing field even more.

## Statements

Training large language models requires a lot of computing power and it is meant for the major players on the market. However, does it mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch.

We have made the following statements:

* we use 1 consumer graphic card
* we train the model only with the Polish corpus
* we use manually selected, high quality texts for training the model.

Why have we made such statements?

It is worth noting that training a model requires several times more resources than using it. To put it simply, it can be assumed that it is about 3-4 times more. Therefore, if a model can be run with a graphic card that has 6 GB VRAM, then training this model requires about 24 GB VRAM (this is the minimum value).

Many consumer computers are equipped with good quality graphic cards that can be used for training a model at one's own home. This is why we have decided to use a top consumer graphic card - Nvidia's RTX 4090 24GB VRAM.

All the currently available language models have been trained mainly with English corpora with a little bit of other languages, including Polish. The effect is that these models are not the best at dealing with the Polish texts. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct forms. Therefore we have decided to prepare a model based only on the Polish corpus. An additional advantage of using only the Polish corpus is the size of the model - it is better to focus on one language in the case of smaller models.

It is important to remember that models are only as good as the data with which they are trained. Given the small size of the model, we trained it with carefully selected texts. This is why we have not used corpora such as Common Crawl that contain a lot of poor-quality data. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over 285GB of Polish language text corpus that has then been processed and used for training the model. Additionally, the unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language.

## Model

APT3-1B-Base has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo). This framework allows the user to train language models similar to the Meta AI's LLaMA models quickly and efficiently.

APT3-1B-Base is an autoregressive language model based on the architecture of a transformer. It has been trained with data collected before the end of December 2023. The training dataset (the Polish corpus) has over 60 billion tokens, and we use all of them for training with one epoch. A special tokenizer has been prepared and trained for the purpose of training the models in the APT3 series.

### Model description:

* **Developed by:** [Azurro](https://azurro.pl)
* **Language:** Polish
* **Model type:** causal decoder-only
* **License:** CC BY NC 4.0 (non-commercial use)

### Model details:

| **Hyperparameter** | **Value** |
|--------------------|-------------|
| Model Parameters | 1041M |
| Sequence Length | 2048 |
| Vocabulary Size | 31980 |
| Layers | 18 |
| Heads | 32 |
| d_head | 64 |
| d_model | 2048 |
| Dropout | 0.0 |
| Bias | No |
| Positional Encoding | RoPE |
| Activation Function | SwiGLU |
| Normalizing Function | RMSNorm |
| Intermediate Size | 5504 |
| Norm Epsilon | 1e-06 |

### Tokenizer details:

* type: BPE
* special tokens: 8 (`<unk>`, `<s>`, `</s>`, `<pad>`, `[INST]`, `[/INST]`, `<<SYS>>`, `<</SYS>>`)
* alphabet size: 113
* vocabulary size: 31980

## Training

* Framework: [ALLaMo](https://github.com/chrisociepa/allamo)
* Visualizations: [W&B](https://wandb.ai)

<p align="center">
  <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-train.jpg">
</p>

<p align="center">
  <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-eval.jpg">
</p>

### Training hyperparameters:

| **Hyperparameter** | **Value** |
|-----------------------------|------------------|
| Micro Batch Size | 1 |
| Gradient Accumulation Steps | 1024 |
| Batch Size | 2097152 |
| Learning Rate (cosine) | 2e-04 -> 2e-05 |
| Warmup Iterations | 1000 |
| All Iterations | 28900 |
| Optimizer | AdamW |
| β1, β2 | 0.9, 0.95 |
| Adam_eps | 1e-8 |
| Weight Decay | 0.1 |
| Grad Clip | 1.0 |
| Precision | bfloat16 |

### Dataset

Collecting a large amount of high quality training data is a great challenge. Over the past years at Azurro, we have done a lot of projects connected with processing Big Data. Therefore, with our extensive experience, we have been able to prepare carefully selected training dataset quickly and efficiently.

Our close collaboration with the Speakleash team has resulted in the creation of over 285GB of the Polish language text corpus. The process of preparing the training dataset involved transforming documents by applying various cleaning and repairing rules, followed by selecting documents of appropriate quality.

Our training dataset contains:

* 150 datasets from [Speakleash](https://speakleash.org) - 93%
* other publicly available and crawled web data - 6%
* Polish Wikipedia - 1%

### Quickstart

This model can be easily loaded using the AutoModelForCausalLM functionality.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Azurro/APT3-1B-Base"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

In order to reduce the memory usage, you can use smaller precision (`bfloat16`).

```python
import torch

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
```

And then you can use Hugging Face Pipelines to generate text:

```python
import transformers

text = "Najważniejszym celem człowieka na ziemi jest"

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

Generated output:

> Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami.

## Limitations and Biases

APT3-1B-Base is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent.

APT3-1B-Base can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT3-1B-Base was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

## License

Because of an unclear legal situation, we have decided to publish the model under CC BY NC 4.0 license - it allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met.

## Disclaimer

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model.

## Citation

Please cite this model using the following format:

```
@online{AzurroAPT3Base1B,
    author = {Krzysztof Ociepa, Azurro},
    title = {Introducing APT3-1B-Base: Polish Language Model},
    year = {2024},
    url = {www.azurro.pl/apt3-1b-base-en},
    note = {Accessed: 2024-01-04}, % change this date
    urldate = {2024-01-04} % change this date
}
```

## Special thanks

We would like to especially thank the [Speakleash](https://speakleash.org) team for collecting and sharing texts in Polish, and for the support we could always count on while preparing the training set for our model. Without you, it would not have been possible to train this model. Thank you!

## The Azurro Team

Please find more information on the Azurro [homepage](https://azurro.pl).

## Contact Us

If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [[email protected]](mailto:[email protected]).
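For orientation only, the architecture table above corresponds to a standard Llama-style decoder (and the reported batch size of 2097152 tokens is consistent with micro batch 1 × 1024 gradient-accumulation steps × 2048-token sequences). The sketch below expresses those hyperparameters as a `transformers` `LlamaConfig`; it is an approximation for reference, not a configuration published by Azurro.

```python
# Illustrative sketch only: a LlamaConfig mirroring the "Model details" table above.
# This is an approximation for reference, not a config file released by Azurro.
from transformers import LlamaConfig

config = LlamaConfig(
    vocab_size=31980,              # Vocabulary Size
    hidden_size=2048,              # d_model (32 heads x 64 d_head)
    intermediate_size=5504,        # Intermediate Size of the SwiGLU MLP
    num_hidden_layers=18,          # Layers
    num_attention_heads=32,        # Heads
    max_position_embeddings=2048,  # Sequence Length (RoPE positions)
    rms_norm_eps=1e-6,             # Norm Epsilon (RMSNorm)
)
# Bias-free projections and 0.0 dropout from the table already match the Llama defaults.
print(config)
```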
{}
RichardErkhov/Azurro_-_APT3-1B-Base-4bits
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T09:10:46+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/x8uwa4s
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:11:06+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
twodigit/Meta-Llama-3-8B-Instruct-koconv2_4327k-sft-lora-230000
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:11:33+00:00
null
mlx
# batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit This model was converted to MLX format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) using mlx-lm version **0.12.1**. Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model. ## Use with mlx ```bash pip install mlx-lm ``` ```python from mlx_lm import load, generate model, tokenizer = load("batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit") response = generate(model, tokenizer, prompt="hello", verbose=True) ```
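Hermes 2 Pro ships a ChatML chat template, so for instruction-style prompts it is usually better to go through the tokenizer's chat template rather than a raw string. A minimal sketch with mlx-lm under that assumption (the system/user messages and `max_tokens` value below are illustrative, not taken from the original card):

```python
from mlx_lm import load, generate

model, tokenizer = load("batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit")

# Build a ChatML-formatted prompt via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what MLX is in one sentence."},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```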
{"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "mlx"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]}
batmac/Hermes-2-Pro-Llama-3-8B-mlx-4bit
null
[ "mlx", "safetensors", "llama", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Meta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
2024-05-03T09:12:05+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) APT3-1B-Base - bnb 8bits - Model creator: https://huggingface.co/Azurro/ - Original model: https://huggingface.co/Azurro/APT3-1B-Base/ Original model description: --- license: cc-by-nc-4.0 datasets: - chrisociepa/wikipedia-pl-20230401 language: - pl library_name: transformers tags: - llama - ALLaMo inference: false --- # APT3-1B-Base ## Introduction At [Azurro](https://azurro.pl), we consistently place importance on using Open Source technologies, both in our projects and in our everyday work. We have decided to share a base language model that we trained ourselves. We are confident that smaller language models have great potential, and that giving everyone interested direct access to them further democratizes this significant and rapidly changing field. ## Statements Training large language models requires a lot of computing power and is usually the domain of the major players on the market. Does that mean that individuals or small companies cannot train language models capable of performing specific tasks? We decided to answer this question and train our own language model from scratch. We made the following assumptions: * we use 1 consumer graphics card * we train the model only on a Polish corpus * we use manually selected, high-quality texts for training the model. Why did we make these assumptions? It is worth noting that training a model requires several times more resources than using it - roughly 3-4 times more. Therefore, if a model can be run on a graphics card with 6 GB of VRAM, training it requires about 24 GB of VRAM (as a minimum). Many consumer computers are equipped with good graphics cards that can be used to train a model at home. This is why we decided to use a top consumer graphics card - an Nvidia RTX 4090 with 24 GB of VRAM. The currently available language models have been trained mainly on English corpora, with only a small share of other languages, including Polish. As a result, these models are not the best at handling Polish text. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct Polish word forms. We therefore decided to prepare a model based only on a Polish corpus. An additional advantage of using only a Polish corpus is model size - with smaller models it is better to focus on a single language. It is important to remember that models are only as good as the data they are trained on. Given the small size of the model, we trained it on carefully selected texts. This is why we have not used corpora such as Common Crawl that contain a lot of poor-quality data. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared a Polish text corpus of over 285 GB, which was then processed and used for training the model. A unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language. ## Model APT3-1B-Base has been trained with the use of an original open source framework called [ALLaMo](https://github.com/chrisociepa/allamo).
This framework allows the user to train language models similar to Meta AI's LLaMA models quickly and efficiently. APT3-1B-Base is an autoregressive language model based on the transformer architecture. It has been trained on data collected before the end of December 2023. The training dataset (the Polish corpus) has over 60 billion tokens, and we used all of them for a single training epoch. A dedicated tokenizer has been prepared and trained for the APT3 series of models. ### Model description: * **Developed by:** [Azurro](https://azurro.pl) * **Language:** Polish * **Model type:** causal decoder-only * **License:** CC BY NC 4.0 (non-commercial use) ### Model details: | **Hyperparameter** | **Value** | |--------------------|-------------| | Model Parameters | 1041M | | Sequence Length | 2048 | | Vocabulary Size | 31980 | | Layers | 18 | | Heads | 32 | | d_head | 64 | | d_model | 2048 | | Dropout | 0.0 | | Bias | No | | Positional Encoding | RoPE | | Activation Function | SwiGLU | | Normalizing Function | RMSNorm | | Intermediate Size | 5504 | | Norm Epsilon | 1e-06 | ### Tokenizer details: * type: BPE * special tokens: 8 (`<unk>`, `<s>`, `</s>`, `<pad>`, `[INST]`, `[/INST]`, `<<SYS>>`, `<</SYS>>`) * alphabet size: 113 * vocabulary size: 31980 ## Training * Framework: [ALLaMo](https://github.com/chrisociepa/allamo) * Visualizations: [W&B](https://wandb.ai) <p align="center"> <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-train.jpg"> </p> <p align="center"> <img src="https://huggingface.co/Azurro/APT3-1B-Base/raw/main/apt3-1b-base-eval.jpg"> </p> ### Training hyperparameters: | **Hyperparameter** | **Value** | |-----------------------------|------------------| | Micro Batch Size | 1 | | Gradient Accumulation Steps | 1024 | | Batch Size | 2097152 | | Learning Rate (cosine) | 2e-04 -> 2e-05 | | Warmup Iterations | 1000 | | All Iterations | 28900 | | Optimizer | AdamW | | β1, β2 | 0.9, 0.95 | | Adam_eps | 1e-8 | | Weight Decay | 0.1 | | Grad Clip | 1.0 | | Precision | bfloat16 | ### Dataset Collecting a large amount of high-quality training data is a great challenge. Over the past years at Azurro, we have carried out many projects connected with processing Big Data. Therefore, with our extensive experience, we have been able to prepare a carefully selected training dataset quickly and efficiently. Our close collaboration with the Speakleash team has resulted in the creation of over 285 GB of Polish language text corpus. The process of preparing the training dataset involved transforming documents by applying various cleaning and repairing rules, followed by selecting documents of appropriate quality. Our training dataset contains: * 150 datasets from [Speakleash](https://speakleash.org) - 93% * other publicly available and crawled web data - 6% * Polish Wikipedia - 1% ### Quickstart This model can be easily loaded using the AutoModelForCausalLM functionality. ```python from transformers import AutoTokenizer, AutoModelForCausalLM model_name = "Azurro/APT3-1B-Base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) ``` In order to reduce memory usage, you can use lower precision (`bfloat16`).
```python import torch model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) ``` And then you can use Hugging Face Pipelines to generate text: ```python import transformers text = "Najważniejszym celem człowieka na ziemi jest" pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer) sequences = pipeline(max_new_tokens=100, do_sample=True, top_k=50, eos_token_id=tokenizer.eos_token_id, text_inputs=text) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` Generated output: > Najważniejszym celem człowieka na ziemi jest życie w pokoju, harmonii i miłości. Dla każdego z nas bardzo ważne jest, aby otaczać się kochanymi osobami. ## Limitations and Biases APT3-1B-Base is not intended for deployment without fine-tuning. It should not be used for human-facing interactions without further guardrails and user consent. APT3-1B-Base can produce factually incorrect output, and should not be relied on to produce factually accurate information. APT3-1B-Base was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs. ## License Because of an unclear legal situation, we have decided to publish the model under the CC BY NC 4.0 license - it allows for non-commercial use. The model can be used for scientific purposes and privately, as long as the license conditions are met. ## Disclaimer The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. ## Citation Please cite this model using the following format: ``` @online{AzurroAPT3Base1B, author = {Krzysztof Ociepa, Azurro}, title = {Introducing APT3-1B-Base: Polish Language Model}, year = {2024}, url = {www.azurro.pl/apt3-1b-base-en}, note = {Accessed: 2024-01-04}, % change this date urldate = {2024-01-04} % change this date } ``` ## Special thanks We would like to especially thank the [Speakleash](https://speakleash.org) team for collecting and sharing texts in Polish, and for the support we could always count on while preparing the training set for our model. Without you, it would not have been possible to train this model. Thank you! ## The Azurro Team Please find more information on the Azurro [homepage](https://azurro.pl). ## Contact Us If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [[email protected]](mailto:[email protected]).
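Since this particular repository is a bitsandbytes 8-bit export of APT3-1B-Base, the same memory savings can also be obtained by loading the original checkpoint in 8-bit via `BitsAndBytesConfig`. Whether the ready-made 8-bit repo loads as-is depends on how it was saved, so treat the sketch below as an assumption-laden example rather than a verified recipe; the prompt reuses the card's own Polish example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Option A (assumption): load the ready-made 8-bit export directly.
# model = AutoModelForCausalLM.from_pretrained(
#     "RichardErkhov/Azurro_-_APT3-1B-Base-8bits", device_map="auto"
# )

# Option B: quantize the original checkpoint to 8-bit at load time with bitsandbytes.
bnb_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("Azurro/APT3-1B-Base")
model = AutoModelForCausalLM.from_pretrained(
    "Azurro/APT3-1B-Base",
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Najważniejszym celem człowieka na ziemi jest", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```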
{}
RichardErkhov/Azurro_-_APT3-1B-Base-8bits
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-03T09:12:11+00:00
null
null
{}
fazi999/fazimodel
null
[ "region:us" ]
null
2024-05-03T09:13:05+00:00
null
null
{"license": "apache-2.0"}
sh2orc/Meta-Llama-3-70B-Instruct-4bit
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T09:13:11+00:00
null
null
{}
cookey39/teratera0-01
null
[ "region:us" ]
null
2024-05-03T09:14:06+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Archan2607/vicuna_rlhf_v4.1
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:15:07+00:00
null
null
{}
zdfxcghjbknlm/first
null
[ "region:us" ]
null
2024-05-03T09:15:36+00:00
null
null
{}
Destr/diffusers_ckpt_step_328.zip
null
[ "region:us" ]
null
2024-05-03T09:15:40+00:00
null
transformers
# Uploaded model - **Developed by:** ssreeramj - **License:** apache-2.0 - **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
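The card does not spell out how the upload is meant to be consumed. If, as is common for Unsloth runs, the repository holds LoRA adapter weights, a PEFT-style load on top of the quoted base model might look like the sketch below; the adapter assumption, prompt, and generation settings are all illustrative.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter (assumes the repo contains PEFT/LoRA weights).
model = PeftModel.from_pretrained(base_model, "ssreeramj/insights_lora_model")

prompt = "Give me three quick insights about customer churn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```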
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"}
ssreeramj/insights_lora_model
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:15:41+00:00
text-generation
transformers
# Phi-3 Mini-128K-Instruct ONNX model for onnxruntime-web This is the same model as the [official Phi-3 ONNX model](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx), with a few changes to make it work with onnxruntime-web: 1. the model is fp16 with int4 block quantization for weights 2. the 'logits' output is fp32 3. the model uses MHA instead of GQA 4. the ONNX file and external data file need to stay below 2GB to be cacheable in Chromium
{"license": "mit", "tags": ["ONNX", "DML", "ONNXRuntime", "phi3", "nlp", "conversational", "custom_code"], "pipeline_tag": "text-generation"}
Xenova/Phi-3-mini-128k-instruct
null
[ "transformers", "onnx", "phi3", "text-generation", "ONNX", "DML", "ONNXRuntime", "nlp", "conversational", "custom_code", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:16:45+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-speakleash-010-pl-3072-32-16-0.01 - bnb 8bits - Model creator: https://huggingface.co/Nondzu/ - Original model: https://huggingface.co/Nondzu/zephyr-speakleash-010-pl-3072-32-16-0.01/ Original model description: --- license: mit --- [speakleash.org](https://speakleash.org) ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
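As a sketch of how the ChatML template above could be filled in and run with plain transformers: the Polish system/user messages, the sampling settings, and the assumption that this 8-bit export loads directly with `from_pretrained` are all illustrative, not taken from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-8bits"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Fill in the ChatML prompt template from the card.
prompt = (
    "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n"
    "<|im_start|>user\nNapisz krótki wiersz o wiośnie.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```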
{}
RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-8bits
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-05-03T09:16:48+00:00
null
transformers
# Uploaded model - **Developed by:** aminlouhichi - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
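Because the repository is tagged `gguf`, one likely consumption path is llama.cpp via llama-cpp-python. The exact GGUF filename inside the repo is not stated in the card, so the filename pattern and context size below are placeholders you would adjust.

```python
from llama_cpp import Llama

# Downloads a GGUF file from the Hub; replace the filename pattern with the actual file in the repo.
llm = Llama.from_pretrained(
    repo_id="aminlouhichi/model_llamaGGUF",
    filename="*.gguf",
    n_ctx=4096,
)

output = llm("Explain what a GGUF file is in one paragraph.", max_tokens=128)
print(output["choices"][0]["text"])
```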
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
aminlouhichi/model_llamaGGUF
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:17:07+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
abc88767/model54
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:17:20+00:00
null
null
{}
ethann29/5_3_075_trainval_codetr
null
[ "region:us" ]
null
2024-05-03T09:17:22+00:00
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
Niggendar/4thTailHentaiModel_03
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-03T09:17:54+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
metythorn/donut-cord
null
[ "transformers", "safetensors", "vision-encoder-decoder", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:19:00+00:00
null
null
{}
ethann29/5_3_08_trainval_codetr
null
[ "region:us" ]
null
2024-05-03T09:19:10+00:00
null
null
{"license": "openrail"}
Loren85/Troy-McClure-Gpt-model
null
[ "license:openrail", "region:us" ]
null
2024-05-03T09:19:42+00:00
null
null
{}
Aragoner/phi-1-5-finetuned
null
[ "tensorboard", "safetensors", "region:us" ]
null
2024-05-03T09:19:56+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi-1-5-finetuned-cazton_complete This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 1 ### Training results ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
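Since this repo is a PEFT adapter on top of microsoft/phi-1_5, a typical way to run it is to let PEFT resolve and load the base model automatically; the prompt and generation length below are illustrative.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "alpdk1394/phi-1-5-finetuned-cazton_complete"

# Loads microsoft/phi-1_5 as the base model and attaches the adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```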
{"license": "mit", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-1_5", "model-index": [{"name": "phi-1-5-finetuned-cazton_complete", "results": []}]}
alpdk1394/phi-1-5-finetuned-cazton_complete
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "base_model:microsoft/phi-1_5", "license:mit", "region:us" ]
null
2024-05-03T09:21:40+00:00
null
transformers
# Uploaded model - **Developed by:** animaRegem - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
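Since the adapter was trained with Unsloth, one option is to load it back through Unsloth's FastLanguageModel; the 4-bit setting, sequence length, and the English prompt for this Malayalam fine-tune are assumptions.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="animaRegem/llama-3-lora-malayalam",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference path

inputs = tokenizer("Write one sentence in Malayalam about the monsoon.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```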
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
animaRegem/llama-3-lora-malayalam
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:21:42+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
animaRegem/llama-3-lora-malayalam-tokenizer
null
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:22:07+00:00
token-classification
spacy
A Named Entity Recognition (NER) model to extract SKILL, EXPERIENCE and BENEFIT from job adverts. | Feature | Description | | --- | --- | | **Name** | `en_skillner` | | **Version** | `3.7.1` | | **spaCy** | `>=3.7.4,<3.8.0` | | **Default Pipeline** | `tok2vec`, `tagger`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` | | **Components** | `tok2vec`, `tagger`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` | | **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) | | **Sources** | [OntoNotes 5](https://catalog.ldc.upenn.edu/LDC2013T19) (Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, Mohammed El-Bachouti, Robert Belvin, Ann Houston)<br>[ClearNLP Constituent-to-Dependency Conversion](https://github.com/clir/clearnlp-guidelines/blob/master/md/components/dependency_conversion.md) (Emory University)<br>[WordNet 3.0](https://wordnet.princeton.edu/) (Princeton University)<br>[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) | | **License** | `MIT` | | **Author** | [nestauk](https://explosion.ai) | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `SKILL`, `EXPERIENCE`, `BENEFIT` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_P` | 46.06 | | `ENTS_R` | 45.74 | | `ENTS_F` | 45.90 |
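A quick usage sketch: the wheel URL in the install comment and the example job-advert sentence are assumptions; install the package however you normally install spaCy pipelines from the Hub, then load it by name.

```python
# pip install "en_skillner @ https://huggingface.co/nestauk/en_skillner/resolve/main/en_skillner-any-py3-none-any.whl"
import spacy

nlp = spacy.load("en_skillner")
doc = nlp(
    "We are looking for a data engineer with 3+ years of Python experience. "
    "We offer a generous pension and 25 days of annual leave."
)
for ent in doc.ents:
    print(ent.text, "->", ent.label_)  # SKILL, EXPERIENCE or BENEFIT
```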
{"language": ["en"], "license": "mit", "tags": ["spacy", "token-classification"]}
nestauk/en_skillner
null
[ "spacy", "token-classification", "en", "license:mit", "model-index", "region:us" ]
null
2024-05-03T09:22:16+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base_brkfst This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0786 - Accuracy: 0.9811 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 27 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5591 | 0.71 | 10 | 0.3018 | 0.8774 | | 0.2997 | 1.43 | 20 | 0.2236 | 0.8679 | | 0.2096 | 2.14 | 30 | 0.1582 | 0.9340 | | 0.2465 | 2.86 | 40 | 0.1677 | 0.9623 | | 0.0823 | 3.57 | 50 | 0.2153 | 0.9528 | | 0.0682 | 4.29 | 60 | 0.2196 | 0.9528 | | 0.1015 | 5.0 | 70 | 0.0825 | 0.9717 | | 0.0364 | 5.71 | 80 | 0.1376 | 0.9623 | | 0.0606 | 6.43 | 90 | 0.1448 | 0.9717 | | 0.03 | 7.14 | 100 | 0.1107 | 0.9811 | | 0.0228 | 7.86 | 110 | 0.0810 | 0.9811 | | 0.003 | 8.57 | 120 | 0.0946 | 0.9811 | | 0.0182 | 9.29 | 130 | 0.0663 | 0.9906 | | 0.0126 | 10.0 | 140 | 0.1986 | 0.9717 | | 0.0006 | 10.71 | 150 | 0.0788 | 0.9811 | | 0.0003 | 11.43 | 160 | 0.0974 | 0.9811 | | 0.0003 | 12.14 | 170 | 0.1012 | 0.9811 | | 0.0005 | 12.86 | 180 | 0.0879 | 0.9811 | | 0.0003 | 13.57 | 190 | 0.0803 | 0.9811 | | 0.0002 | 14.29 | 200 | 0.0794 | 0.9811 | | 0.0002 | 15.0 | 210 | 0.0786 | 0.9811 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
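The card does not document what the labels mean, so the snippet below only shows the mechanics of running the checkpoint; the example text is illustrative.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="JBhug/roberta-base_brkfst")
print(classifier("Two eggs, toast and a flat white, table by the window."))
# -> [{'label': ..., 'score': ...}]  (label names depend on the model config)
```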
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "roberta-base_brkfst", "results": []}]}
JBhug/roberta-base_brkfst
null
[ "transformers", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:22:29+00:00
text-to-image
diffusers
# AAM XL Anime Mix - Anime Screencap Style Model Do you like what I do? Consider supporting me on Patreon 🅿️ or feel free to buy me a coffee ☕. A ❤️, a kind comment or a review is greatly appreciated. Join my Discord Server. This is essentially derived from AAM AnyLoRA Anime Mix, but it's based on SDXL. It won't work with loras and embeddings for SD1.5 even if they're trained on the original AAM or AnyLoRA. "Mix" as in a mix of anime styles. I suggest you use CFG 5-7 (not higher than 8), 20-30 steps with Euler a. Upscaler suggestions are None (bicubic upscaling in ComfyUI), any good GAN for anime, or Latent (only if you know what you're doing). For Turbo I suggest you use CFG 3-4 and 8 steps Euler a or 15 steps LCM. Unlike with DreamShaper Turbo, I think base AAM XL is better than AAM XL Turbo most of the time. However, the latter is MUCH faster. ## Purpose of this model * Make amazing anime style artworks on its own. * Train character loras. * Use anime styles. * Generate anime art and stylized art. It can generate hen/tai on its own, but you might need loras and embeddings for specific stuff. ComfyUI Workflow: https://pastebin.com/BHCEzc6T Finetuned over "DreamShaper Anime", which is a mix of Anime Art Diffusion, Hassaku and DreamShaper XL. Finetuned using the AAM dataset. License: Fair AI Public License 1.0-SD
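A sketch of the suggested settings (Euler a, CFG 5-7, 20-30 steps) with diffusers, assuming the repository loads as a standard SDXL pipeline; the prompt is illustrative.

```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cookey39/aam_xl", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)  # Euler a

image = pipe(
    "anime screencap, 1girl walking under cherry blossoms, soft lighting",
    num_inference_steps=25,
    guidance_scale=6.0,
).images[0]
image.save("aam_xl_sample.png")
```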
{"library_name": "diffusers"}
cookey39/aam_xl
null
[ "diffusers", "safetensors", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-03T09:22:38+00:00
text-generation
transformers
{}
mwalol/tacky-fennec-classifier-awq
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T09:23:03+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_pork_classifier This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0066 - F1: 0.9667 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0406 | 1.0 | 760 | 0.0126 | 0.8727 | | 0.0083 | 2.0 | 1520 | 0.0061 | 0.9667 | | 0.0017 | 3.0 | 2280 | 0.0061 | 0.9667 | | 0.0005 | 4.0 | 3040 | 0.0064 | 0.9667 | | 0.0001 | 5.0 | 3800 | 0.0066 | 0.9667 | ### Framework versions - Transformers 4.29.2 - Pytorch 1.13.1+cu116 - Datasets 2.19.0 - Tokenizers 0.13.3
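A minimal usage sketch, not in the original card: it assumes the checkpoint works as a standard text-classification pipeline; the example input is a placeholder and the classes are undocumented.

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="andikazf15/distilbert_pork_classifier")
# Hypothetical input: the card does not document the classes (e.g. pork vs. non-pork ingredients).
print(classifier("Contains gelatin derived from pork."))
```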
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["f1"], "model-index": [{"name": "distilbert_pork_classifier", "results": []}]}
andikazf15/distilbert_pork_classifier
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:23:55+00:00
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
shtapm/whisper-large_0502_adapter_encoderall_and_decoder31_200steps
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:24:37+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/t63n3up
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:24:46+00:00
sentence-similarity
sentence-transformers
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 3850 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters: ``` {'loss': 'CoSENTLoss', 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1], 'n_dims_per_step': -1} ``` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1540, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
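As a rough illustration (not the author's original training script), the loss configuration listed above corresponds to something like the following sentence-transformers setup; the dataloader and evaluator are omitted:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("hrusheekeshsawarkar/indic-sentence-bert-nli-matryoshka")

# MatryoshkaLoss wraps CoSENTLoss so embeddings stay useful when truncated to smaller dimensions.
base_loss = losses.CoSENTLoss(model)
train_loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
# model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=4,
#           warmup_steps=1540, optimizer_params={"lr": 2e-05}, weight_decay=0.01)
```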
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"}
hrusheekeshsawarkar/indic-sentence-bert-nli-matryoshka
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "transformers", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:25:13+00:00
null
null
{"license": "apache-2.0"}
Chandanv1989/phi-1_5-q4f16_1-MLC
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T09:26:20+00:00
null
transformers
# LeroyDyer/Mixtral_AI_Chat_1.0-Q4_K_S-GGUF This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_Chat_1.0`](https://huggingface.co/LeroyDyer/Mixtral_AI_Chat_1.0) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_Chat_1.0) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo LeroyDyer/Mixtral_AI_Chat_1.0-Q4_K_S-GGUF --model mixtral_ai_chat_1.0.Q4_K_S.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo LeroyDyer/Mixtral_AI_Chat_1.0-Q4_K_S-GGUF --model mixtral_ai_chat_1.0.Q4_K_S.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_chat_1.0.Q4_K_S.gguf -n 128 ```
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo"], "base_model": "LeroyDyer/Mixtral_AI_Chat_2.0"}
LeroyDyer/Mixtral_AI_Chat_1.0-Q4_K_S-GGUF
null
[ "transformers", "gguf", "text-generation-inference", "unsloth", "mistral", "trl", "llama-cpp", "gguf-my-repo", "en", "base_model:LeroyDyer/Mixtral_AI_Chat_2.0", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:26:37+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
KevinKibe/whisper-c2translate
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:26:40+00:00
null
null
{"license": "apache-2.0"}
marufhasan008/Village
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T09:27:15+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RoBERTa_BART_hybrid_V1 This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the arrow dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 71 | 2.2072 | ### Framework versions - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
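A minimal usage sketch, not part of the original card; the summarization task is assumed from the BART architecture, and the example input is a placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="MikaSie/RoBERTa_BART_hybrid_V1")
long_text = "..."  # placeholder for a long input document
print(summarizer(long_text, max_length=256, min_length=64)[0]["summary_text"])
```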
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["arrow"], "base_model": "facebook/bart-large", "model-index": [{"name": "RoBERTa_BART_hybrid_V1", "results": []}]}
MikaSie/RoBERTa_BART_hybrid_V1
null
[ "transformers", "safetensors", "bart", "text2text-generation", "generated_from_trainer", "dataset:arrow", "base_model:facebook/bart-large", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:27:15+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # RM-helpful_helpful_human_loraR64_20000_gemma2b_lr1e-05_bs2_g4 This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6261 - Accuracy: 0.6495 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6504 | 1.0 | 2246 | 0.6341 | 0.6455 | | 0.6246 | 2.0 | 4492 | 0.6261 | 0.6495 | ### Framework versions - PEFT 0.9.0 - Transformers 4.38.2 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
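A loading sketch under stated assumptions: the adapter was trained with TRL's reward trainer as a single-logit sequence-classification head on `google/gemma-2b`, and whether the head weights ship with the adapter is not documented, so the score below is only illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained("google/gemma-2b", num_labels=1)
model = PeftModel.from_pretrained(
    base, "Holarissun/RM-helpful_helpful_human_loraR64_20000_gemma2b_lr1e-05_bs2_g4"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

inputs = tokenizer("Question: ...\nAnswer: ...", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0].item()  # higher = judged more helpful
print(reward)
```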
{"license": "gemma", "library_name": "peft", "tags": ["trl", "reward-trainer", "generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/gemma-2b", "model-index": [{"name": "RM-helpful_helpful_human_loraR64_20000_gemma2b_lr1e-05_bs2_g4", "results": []}]}
Holarissun/RM-helpful_helpful_human_loraR64_20000_gemma2b_lr1e-05_bs2_g4
null
[ "peft", "safetensors", "trl", "reward-trainer", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-05-03T09:27:19+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Audino/my-awesome-modelv5-bpara
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:27:39+00:00
null
null
{}
tanyakansal/arithBio-7B-gguf
null
[ "gguf", "region:us" ]
null
2024-05-03T09:28:39+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Chat-Error/Llama-3-limarp
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:28:44+00:00
fill-mask
transformers
# DRAGON BERT base domain-specific Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was pretrained using domain-specific data (i.e., clinical reports) from scratch. The architecture is the same as [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) from HuggingFace. The tokenizer was fitted to the dataset of Dutch medical reports, using the same settings for the tokenizer as [`roberta-base`](https://huggingface.co/FacebookAI/roberta-base). ## Model description BERT is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way, using an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-bert-base-domain-specific") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-domain-specific") model = AutoModel.from_pretrained("joeranbosma/dragon-bert-base-domain-specific") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 6e-4 - `train_batch_size`: 16 - `eval_batch_size`: 16 - `seed`: 42 - `gradient_accumulation_steps`: 16 - `total_train_batch_size`: 256 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 10.0 - `max_seq_length`: 512 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
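The 15% / 80-10-10 masking described above matches the default Hugging Face MLM collator; a minimal sketch, not from the original card:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-domain-specific")
# mlm_probability selects 15% of tokens; the 80/10/10 mask/random/keep split is handled internally.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("Dit onderzoek toont geen aanwijzingen voor significant carcinoom.")])
print(batch["input_ids"])  # masked inputs
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```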
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-bert-base-domain-specific
null
[ "transformers", "pytorch", "bert", "fill-mask", "doi:10.57967/hf/2167", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:29:17+00:00
null
transformers
# Uploaded model - **Developed by:** Aryaduta - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-2b-bnb-4bit This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "gemma", "trl"], "base_model": "unsloth/gemma-2b-bnb-4bit"}
Aryaduta/llm_robot
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma", "trl", "en", "base_model:unsloth/gemma-2b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:29:36+00:00
null
null
{}
ivykopal/german_prompt_mlqa_prompt_100k
null
[ "region:us" ]
null
2024-05-03T09:29:53+00:00
text-generation
transformers
{"license": "apache-2.0"}
Gurminder/fairy
null
[ "transformers", "pytorch", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:29:55+00:00
null
null
{}
zeus546/best
null
[ "region:us" ]
null
2024-05-03T09:30:17+00:00
null
null
{}
LaylansVoice/GTAsaPistol
null
[ "region:us" ]
null
2024-05-03T09:33:04+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) zephyr-speakleash-010-pl-3072-32-16-0.01 - GGUF - Model creator: https://huggingface.co/Nondzu/ - Original model: https://huggingface.co/Nondzu/zephyr-speakleash-010-pl-3072-32-16-0.01/ | Name | Quant method | Size | | ---- | ---- | ---- | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q2_K.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q2_K.gguf) | Q2_K | 2.53GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_XS.gguf) | IQ3_XS | 2.81GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_S.gguf) | IQ3_S | 2.96GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_S.gguf) | Q3_K_S | 2.95GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.IQ3_M.gguf) | IQ3_M | 3.06GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K.gguf) | Q3_K | 3.28GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_M.gguf) | Q3_K_M | 3.28GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q3_K_L.gguf) | Q3_K_L | 3.56GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.IQ4_XS.gguf) | IQ4_XS | 3.67GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_0.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_0.gguf) | Q4_0 | 3.83GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.IQ4_NL.gguf) | IQ4_NL | 3.87GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K_S.gguf) | Q4_K_S | 3.86GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K.gguf) | Q4_K | 4.07GB | | 
[zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K_M.gguf) | Q4_K_M | 4.07GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_1.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_1.gguf) | Q4_1 | 4.24GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_0.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_0.gguf) | Q5_0 | 4.65GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K_S.gguf) | Q5_K_S | 4.65GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K.gguf) | Q5_K | 4.78GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_K_M.gguf) | Q5_K_M | 4.78GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_1.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q5_1.gguf) | Q5_1 | 5.07GB | | [zephyr-speakleash-010-pl-3072-32-16-0.01.Q6_K.gguf](https://huggingface.co/RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf/blob/main/zephyr-speakleash-010-pl-3072-32-16-0.01.Q6_K.gguf) | Q6_K | 5.53GB | Original model description: --- license: mit --- [speakleash.org](https://speakleash.org) ## Prompt template: ChatML ``` <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
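A usage sketch via the llama-cpp-python bindings (an assumption; the card itself only lists the GGUF files), wrapping a Polish prompt in the ChatML template shown above:

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf",
    filename="zephyr-speakleash-010-pl-3072-32-16-0.01.Q4_K_M.gguf",
    n_ctx=2048,
)
prompt = (
    "<|im_start|>system\nJesteś pomocnym asystentem.<|im_end|>\n"     # "You are a helpful assistant."
    "<|im_start|>user\nNapisz krótki wiersz o wiośnie.<|im_end|>\n"   # "Write a short poem about spring."
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=200, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```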
{}
RichardErkhov/Nondzu_-_zephyr-speakleash-010-pl-3072-32-16-0.01-gguf
null
[ "gguf", "region:us" ]
null
2024-05-03T09:33:18+00:00
null
null
{}
optimum-internal-testing/optimum-neuron-cache-for-testing-jndre
null
[ "region:us" ]
null
2024-05-03T09:33:54+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/l2u46k7
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:33:58+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mamba-130m-hf - bnb 4bits - Model creator: https://huggingface.co/state-spaces/ - Original model: https://huggingface.co/state-spaces/mamba-130m-hf/ Original model description: --- library_name: transformers tags: [] --- # Mamba <!-- Provide a quick summary of what the model is/does. --> This repository contains the `transformers`-compatible `mamba-130m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo. # Usage You need to install `transformers` from `main` until `transformers=4.39.0` is released. ```bash pip install git+https://github.com/huggingface/transformers@main ``` We also recommend installing both `causal-conv1d` and `mamba-ssm` using: ```bash pip install causal-conv1d>=1.2.0 pip install mamba-ssm ``` If either of these is not installed, the "eager" implementation will be used. Otherwise, the more optimised CUDA kernels will be used. ## Generation You can use the classic `generate` API: ```python >>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") >>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] >>> out = model.generate(input_ids, max_new_tokens=10) >>> print(tokenizer.batch_decode(out)) ["Hey how are you doing?\n\nI'm so glad you're here."] ``` ## PEFT finetuning example In order to finetune using the `peft` library, we recommend keeping the model in float32! ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") dataset = load_dataset("Abirate/english_quotes", split="train") training_args = TrainingArguments( output_dir="./results", num_train_epochs=3, per_device_train_batch_size=4, logging_dir='./logs', logging_steps=10, learning_rate=2e-3 ) lora_config = LoraConfig( r=8, target_modules=["x_proj", "embeddings", "in_proj", "out_proj"], task_type="CAUSAL_LM", bias="none" ) trainer = SFTTrainer( model=model, tokenizer=tokenizer, args=training_args, peft_config=lora_config, train_dataset=dataset, dataset_text_field="quote", ) trainer.train() ```
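The snippets above load the original full-precision checkpoint. To use the pre-quantized 4-bit weights in this repository directly, a minimal sketch could look like the following; it assumes `bitsandbytes` and `accelerate` are installed, a CUDA GPU is available, and that the quantization config stored in this repo is picked up automatically by `from_pretrained`.

```python
# Hedged sketch: load the pre-quantized bnb 4-bit checkpoint from this repository.
# Assumes `pip install bitsandbytes accelerate` and a CUDA-capable GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/state-spaces_-_mamba-130m-hf-4bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# The bitsandbytes quantization config shipped with the repo should be applied automatically.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"].to(model.device)
out = model.generate(input_ids, max_new_tokens=10)
print(tokenizer.batch_decode(out))
```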
{}
RichardErkhov/state-spaces_-_mamba-130m-hf-4bits
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "4-bit", "region:us" ]
null
2024-05-03T09:34:36+00:00
text-generation
transformers
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mamba-130m-hf - bnb 8bits - Model creator: https://huggingface.co/state-spaces/ - Original model: https://huggingface.co/state-spaces/mamba-130m-hf/ Original model description: --- library_name: transformers tags: [] --- # Mamba <!-- Provide a quick summary of what the model is/does. --> This repository contains the `transformers`-compatible `mamba-130m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo. # Usage You need to install `transformers` from `main` until `transformers=4.39.0` is released. ```bash pip install git+https://github.com/huggingface/transformers@main ``` We also recommend installing both `causal-conv1d` and `mamba-ssm` using: ```bash pip install causal-conv1d>=1.2.0 pip install mamba-ssm ``` If either of these is not installed, the "eager" implementation will be used. Otherwise, the more optimised CUDA kernels will be used. ## Generation You can use the classic `generate` API: ```python >>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") >>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] >>> out = model.generate(input_ids, max_new_tokens=10) >>> print(tokenizer.batch_decode(out)) ["Hey how are you doing?\n\nI'm so glad you're here."] ``` ## PEFT finetuning example In order to finetune using the `peft` library, we recommend keeping the model in float32! ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") dataset = load_dataset("Abirate/english_quotes", split="train") training_args = TrainingArguments( output_dir="./results", num_train_epochs=3, per_device_train_batch_size=4, logging_dir='./logs', logging_steps=10, learning_rate=2e-3 ) lora_config = LoraConfig( r=8, target_modules=["x_proj", "embeddings", "in_proj", "out_proj"], task_type="CAUSAL_LM", bias="none" ) trainer = SFTTrainer( model=model, tokenizer=tokenizer, args=training_args, peft_config=lora_config, train_dataset=dataset, dataset_text_field="quote", ) trainer.train() ```
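For reference, comparable 8-bit weights can also be produced on the fly from the original checkpoint with `BitsAndBytesConfig`; the sketch below is illustrative only, since the exact settings used to build this repository are not documented here.

```python
# Illustrative sketch: on-the-fly bitsandbytes 8-bit loading of the original checkpoint.
# The exact quantization settings used for this repository are an assumption, not documented here.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf")
model = AutoModelForCausalLM.from_pretrained(
    "state-spaces/mamba-130m-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"].to(model.device)
print(tokenizer.batch_decode(model.generate(input_ids, max_new_tokens=10)))
```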
{}
RichardErkhov/state-spaces_-_mamba-130m-hf-8bits
null
[ "transformers", "safetensors", "mamba", "text-generation", "autotrain_compatible", "endpoints_compatible", "8-bit", "region:us" ]
null
2024-05-03T09:35:04+00:00
null
null
{}
salmoosh/llava-1.5-7b-hf-ft-mix-vsft
null
[ "region:us" ]
null
2024-05-03T09:35:34+00:00
null
null
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) mamba-130m-hf - GGUF - Model creator: https://huggingface.co/state-spaces/ - Original model: https://huggingface.co/state-spaces/mamba-130m-hf/ | Name | Quant method | Size | | ---- | ---- | ---- | | [mamba-130m-hf.Q2_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q2_K.gguf) | Q2_K | 0.08GB | | [mamba-130m-hf.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.IQ3_XS.gguf) | IQ3_XS | 0.09GB | | [mamba-130m-hf.IQ3_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.IQ3_S.gguf) | IQ3_S | 0.09GB | | [mamba-130m-hf.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q3_K_S.gguf) | Q3_K_S | 0.09GB | | [mamba-130m-hf.IQ3_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.IQ3_M.gguf) | IQ3_M | 0.09GB | | [mamba-130m-hf.Q3_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q3_K.gguf) | Q3_K | 0.09GB | | [mamba-130m-hf.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q3_K_M.gguf) | Q3_K_M | 0.09GB | | [mamba-130m-hf.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q3_K_L.gguf) | Q3_K_L | 0.09GB | | [mamba-130m-hf.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.IQ4_XS.gguf) | IQ4_XS | 0.09GB | | [mamba-130m-hf.Q4_0.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q4_0.gguf) | Q4_0 | 0.1GB | | [mamba-130m-hf.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.IQ4_NL.gguf) | IQ4_NL | 0.1GB | | [mamba-130m-hf.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q4_K_S.gguf) | Q4_K_S | 0.1GB | | [mamba-130m-hf.Q4_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q4_K.gguf) | Q4_K | 0.1GB | | [mamba-130m-hf.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q4_K_M.gguf) | Q4_K_M | 0.1GB | | [mamba-130m-hf.Q4_1.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q4_1.gguf) | Q4_1 | 0.1GB | | [mamba-130m-hf.Q5_0.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q5_0.gguf) | Q5_0 | 0.11GB | | [mamba-130m-hf.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q5_K_S.gguf) | Q5_K_S | 0.11GB | | [mamba-130m-hf.Q5_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q5_K.gguf) | Q5_K | 0.11GB | | [mamba-130m-hf.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q5_K_M.gguf) | Q5_K_M | 0.11GB | | [mamba-130m-hf.Q5_1.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q5_1.gguf) | Q5_1 | 0.11GB | | 
[mamba-130m-hf.Q6_K.gguf](https://huggingface.co/RichardErkhov/state-spaces_-_mamba-130m-hf-gguf/blob/main/mamba-130m-hf.Q6_K.gguf) | Q6_K | 0.12GB | Original model description: --- library_name: transformers tags: [] --- # Mamba <!-- Provide a quick summary of what the model is/does. --> This repository contains the `transformers`-compatible `mamba-130m`. The checkpoints are untouched, but the full `config.json` and tokenizer are pushed to this repo. # Usage You need to install `transformers` from `main` until `transformers=4.39.0` is released. ```bash pip install git+https://github.com/huggingface/transformers@main ``` We also recommend installing both `causal-conv1d` and `mamba-ssm` using: ```bash pip install causal-conv1d>=1.2.0 pip install mamba-ssm ``` If either of these is not installed, the "eager" implementation will be used. Otherwise, the more optimised CUDA kernels will be used. ## Generation You can use the classic `generate` API: ```python >>> from transformers import MambaConfig, MambaForCausalLM, AutoTokenizer >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") >>> model = MambaForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") >>> input_ids = tokenizer("Hey how are you doing?", return_tensors="pt")["input_ids"] >>> out = model.generate(input_ids, max_new_tokens=10) >>> print(tokenizer.batch_decode(out)) ["Hey how are you doing?\n\nI'm so glad you're here."] ``` ## PEFT finetuning example In order to finetune using the `peft` library, we recommend keeping the model in float32! ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("state-spaces/mamba-130m-hf") model = AutoModelForCausalLM.from_pretrained("state-spaces/mamba-130m-hf") dataset = load_dataset("Abirate/english_quotes", split="train") training_args = TrainingArguments( output_dir="./results", num_train_epochs=3, per_device_train_batch_size=4, logging_dir='./logs', logging_steps=10, learning_rate=2e-3 ) lora_config = LoraConfig( r=8, target_modules=["x_proj", "embeddings", "in_proj", "out_proj"], task_type="CAUSAL_LM", bias="none" ) trainer = SFTTrainer( model=model, tokenizer=tokenizer, args=training_args, peft_config=lora_config, train_dataset=dataset, dataset_text_field="quote", ) trainer.train() ```
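To fetch one of the GGUF files listed in the table and run it locally, a minimal sketch with `huggingface_hub` and `llama-cpp-python` could look like the following; it assumes the installed llama.cpp build is recent enough to support the Mamba architecture.

```python
# Hedged sketch: download one of the quantized files above and run it with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python` and Mamba support in the installed llama.cpp build.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/state-spaces_-_mamba-130m-hf-gguf",
    filename="mamba-130m-hf.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path)
print(llm("Hey how are you doing?", max_tokens=16)["choices"][0]["text"])
```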
{}
RichardErkhov/state-spaces_-_mamba-130m-hf-gguf
null
[ "gguf", "region:us" ]
null
2024-05-03T09:35:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOpeepeepoopoo/herewegoagain10
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:36:27+00:00
text-generation
transformers
# D_AU-Tiefighter-Holomax-20B-V1 D_AU-Tiefighter-Holomax-20B-V1 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing): * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Holomax](https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Holomax](https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Holomax](https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) * [KoboldAI/LLaMA2-13B-Holomax](https://huggingface.co/KoboldAI/LLaMA2-13B-Holomax) * [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter) ## 🧩 Configuration ```yaml slices: - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [0, 10] - sources: - model: KoboldAI/LLaMA2-13B-Holomax layer_range: [11,15] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [16,20] - sources: - model: KoboldAI/LLaMA2-13B-Holomax layer_range: [16,22] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [21, 30] - sources: - model: KoboldAI/LLaMA2-13B-Holomax layer_range: [31,33] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [31,35] - sources: - model: KoboldAI/LLaMA2-13B-Holomax layer_range: [36,40] - sources: - model: KoboldAI/LLaMA2-13B-Tiefighter layer_range: [36,40] merge_method: passthrough dtype: bfloat16 ``` ## πŸ’» Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "DavidAU/D_AU-Tiefighter-Holomax-20B-V1" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
{"tags": ["merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax"], "base_model": ["KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax", "KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax", "KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax", "KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax", "KoboldAI/LLaMA2-13B-Tiefighter"]}
DavidAU/D_AU-Tiefighter-Holomax-20B-V1
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "KoboldAI/LLaMA2-13B-Tiefighter", "KoboldAI/LLaMA2-13B-Holomax", "base_model:KoboldAI/LLaMA2-13B-Tiefighter", "base_model:KoboldAI/LLaMA2-13B-Holomax", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:36:37+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vasista_te_small-arthink This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Google Fleurs dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 4 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 1000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.41.0.dev0 - Pytorch 2.3.0+cpu - Datasets 2.19.0 - Tokenizers 0.19.1
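A minimal transcription example with the fine-tuned checkpoint is sketched below; the audio path is a placeholder and `ffmpeg` is assumed to be available for decoding.

```python
# Hedged sketch: transcribe a Telugu audio file with the fine-tuned Whisper checkpoint.
# "sample.wav" is a placeholder path; ffmpeg must be installed for audio decoding.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="April01524/ref_vasista_telugu_base",
)
print(asr("sample.wav")["text"])
```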
{"language": ["te"], "license": "apache-2.0", "tags": ["hf-asr-leaderboard", "generated_from_trainer"], "datasets": ["google/fleurs"], "base_model": "openai/whisper-small", "model-index": [{"name": "vasista_te_small-arthink", "results": []}]}
April01524/ref_vasista_telugu_base
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "te", "dataset:google/fleurs", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:36:42+00:00
token-classification
transformers
{}
Daisyyy05/biobert-finetuned-ner
null
[ "transformers", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:38:03+00:00
null
null
{}
Kudod/vistral-7B_finetuned_A100_3thMay
null
[ "region:us" ]
null
2024-05-03T09:38:03+00:00
null
null
{}
Higgs201/llama-3-8b-bnb-4bit-mental-health
null
[ "region:us" ]
null
2024-05-03T09:38:08+00:00
fill-mask
transformers
# DRAGON RoBERTa base mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was first pretrained using general domain data, as specified [here](https://huggingface.co/xlm-roberta-base). The pretrained model was taken from HuggingFace: [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`xlm-roberta-base`](https://huggingface.co/xlm-roberta-base) was used. ## Model description RoBERTa is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way; an automatic process generates inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-roberta-base-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-roberta-base-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-roberta-base-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 512 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
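As a concrete illustration of the intended downstream use, the sketch below wraps the pretrained encoder for report-level sequence classification; the label count and input text are placeholders, and this is not the DRAGON benchmark pipeline itself.

```python
# Hedged sketch: fine-tuning setup for report-level classification on top of the pretrained encoder.
# num_labels and the input text are placeholders; the classification head is freshly initialized.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "joeranbosma/dragon-roberta-base-mixed-domain"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("Replace me by any text you'd like.", truncation=True, return_tensors="pt")
print(model(**inputs).logits.shape)  # torch.Size([1, 2])
```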
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-roberta-base-mixed-domain
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "doi:10.57967/hf/2168", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:38:13+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phi2-16bit This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - training_steps: 128 ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.1 - Pytorch 2.1.2 - Datasets 2.19.0 - Tokenizers 0.19.1
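This repository holds a PEFT adapter for `microsoft/phi-2`. A hedged sketch of attaching the adapter to the base model is shown below; it assumes the repo contains standard `peft` adapter files, and the prompt format is only an example.

```python
# Hedged sketch: attach this PEFT adapter to the microsoft/phi-2 base model.
# Assumes the repo contains standard peft adapter files (adapter_config.json, adapter weights).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
model = PeftModel.from_pretrained(base, "uzzivirus/phi2-16bit")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

inputs = tokenizer("Instruct: What is gradient accumulation?\nOutput:", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```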
{"license": "mit", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi2-16bit", "results": []}]}
uzzivirus/phi2-16bit
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-05-03T09:38:40+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "orpo"]}
DuongTrongChi/opp
null
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "orpo", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T09:40:17+00:00
text-to-image
diffusers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "diffusers"}
Niggendar/duchaitenPonyXLNo_v20
null
[ "diffusers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
null
2024-05-03T09:40:22+00:00
fill-mask
transformers
# DRAGON BERT base mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was first pretrained using general domain data, as specified [here](https://huggingface.co/bert-base-multilingual-cased). The pretrained model was taken from HuggingFace: [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`bert-base-multilingual-cased`](https://huggingface.co/bert-base-multilingual-cased) was used. ## Model description BERT is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way; an automatic process generates inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-bert-base-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS [MASK].") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-bert-base-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 512 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
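The 15% masking with the 80/10/10 replacement scheme described above matches the default behaviour of the Hugging Face MLM data collator used by `run_mlm.py`. The sketch below is purely illustrative of how such masked inputs are generated and is not the original pretraining script.

```python
# Illustrative sketch of the 15% / 80-10-10 masking scheme via the default MLM collator.
# This is not the original pretraining code; it only shows how masked batches are produced.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-bert-base-mixed-domain")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

batch = collator([tokenizer("Replace me by any text you'd like.")])
print(batch["input_ids"])  # some tokens replaced by [MASK] or random tokens
print(batch["labels"])     # -100 everywhere except the selected (masked) positions
```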
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-bert-base-mixed-domain
null
[ "transformers", "pytorch", "bert", "fill-mask", "doi:10.57967/hf/2166", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:14+00:00
fill-mask
transformers
# DRAGON RoBERTa large mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was first pretrained using general domain data, as specified [here](https://huggingface.co/xlm-roberta-large). The pretrained model was taken from HuggingFace: [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) was used. ## Model description RoBERTa is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way; an automatic process generates inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-roberta-large-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-roberta-large-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-roberta-large-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 512 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-roberta-large-mixed-domain
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "doi:10.57967/hf/2170", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:21+00:00
fill-mask
transformers
# DRAGON Longformer base mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper. The model was first pretrained using general domain data, as specified [here](https://huggingface.co/allenai/longformer-base-4096). The pretrained model was taken from HuggingFace: [`allenai/longformer-base-4096`](https://huggingface.co/allenai/longformer-base-4096). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`allenai/longformer-base-4096`](https://huggingface.co/allenai/longformer-base-4096) was used. ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way; an automatic process generates inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch → Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple → Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple → Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English → Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English → Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT-2.
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-base-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-base-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-base-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 2 - `eval_batch_size`: 2 - `seed`: 42 - `gradient_accumulation_steps`: 8 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
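The masking strategy described above (15% of tokens selected; of those, 80% replaced by the mask token, 10% by a random token, 10% left unchanged) matches the default behaviour of the Hugging Face MLM data collator used by `run_mlm.py`. As a hedged illustration only, not the original pretraining code, the sketch below shows how such masked inputs can be generated for this tokenizer:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-base-mixed-domain")

# mlm_probability=0.15 selects 15% of the tokens; by default the collator replaces
# 80% of them with the mask token, 10% with a random token, and keeps 10% unchanged.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

example = tokenizer("Dit onderzoek geen aanwijzingen voor significant carcinoom.")
batch = collator([example["input_ids"]])
print(batch["input_ids"])  # the (randomly) masked input
print(batch["labels"])     # -100 everywhere except at the masked positions
```

Each call to the collator draws a fresh random masking, which is what the model saw repeatedly during pretraining.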
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-longformer-base-mixed-domain
null
[ "transformers", "pytorch", "longformer", "fill-mask", "doi:10.57967/hf/2172", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:23+00:00
fill-mask
transformers
# DRAGON Longformer large mixed-domain Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was first pretrained using general domain data, as specified [here](https://huggingface.co/allenai/longformer-large-4096). The pretrained model was taken from HuggingFace: [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096). Subsequently, the model was pretrained using domain-specific data (i.e., clinical reports). The tokenizer of [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096) was used. ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch β†’ Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple β†’ Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple β†’ Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English β†’ Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English β†’ Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-large-mixed-domain") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-large-mixed-domain") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 5e-05 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 4 - `total_train_batch_size`: 16 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 3.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-longformer-large-mixed-domain
null
[ "transformers", "pytorch", "longformer", "fill-mask", "doi:10.57967/hf/2174", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:26+00:00
fill-mask
transformers
# DRAGON RoBERTa large domain-specific Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was pretrained using domain-specific data (i.e., clinical reports) from scratch. The architecture is the same as [`xlm-roberta-large`](https://huggingface.co/xlm-roberta-large) from HuggingFace. The tokenizer was fitted to the dataset of Dutch medical reports, using the same settings for the tokenizer as [`roberta-base`](https://huggingface.co/FacebookAI/roberta-base). ## Model description RoBERTa is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch β†’ Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple β†’ Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple β†’ Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English β†’ Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English β†’ Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-roberta-large-domain-specific") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-roberta-large-domain-specific") model = AutoModel.from_pretrained("joeranbosma/dragon-roberta-large-domain-specific") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 1e-4 - `train_batch_size`: 8 - `eval_batch_size`: 8 - `seed`: 42 - `gradient_accumulation_steps`: 32 - `total_train_batch_size`: 256 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 10.0 - `max_seq_length`: 512 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-roberta-large-domain-specific
null
[ "transformers", "pytorch", "roberta", "fill-mask", "doi:10.57967/hf/2171", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:28+00:00
fill-mask
transformers
# DRAGON Longformer base domain-specific Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was pretrained using domain-specific data (i.e., clinical reports) from scratch. The architecture is the same as [`allenai/longformer-base-4096`](https://huggingface.co/allenai/longformer-base-4096) from HuggingFace. The tokenizer was fitted to the dataset of Dutch medical reports, using the same settings for the tokenizer as [`roberta-base`](https://huggingface.co/FacebookAI/roberta-base). ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch β†’ Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple β†’ Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple β†’ Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English β†’ Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English β†’ Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-base-domain-specific") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-base-domain-specific") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-base-domain-specific") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 6e-4 - `train_batch_size`: 16 - `eval_batch_size`: 16 - `seed`: 42 - `gradient_accumulation_steps`: 16 - `total_train_batch_size`: 256 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 10.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
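Because this model is mainly intended to be fine-tuned on whole-report tasks such as sequence classification, a minimal fine-tuning sketch is shown below. It is a hedged example only: the two-example dummy dataset, the labels, and the training arguments are placeholders and are not part of the original DRAGON work.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

model_name = "joeranbosma/dragon-longformer-base-domain-specific"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The classification head is newly initialised and is learned during fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tiny placeholder dataset; replace with your own labeled reports.
data = Dataset.from_dict({
    "report": ["Voorbeeld van een radiologieverslag.", "Nog een voorbeeldverslag."],
    "label": [0, 1],
})

def preprocess(batch):
    # Truncate to the model's 4096-token context window.
    return tokenizer(batch["report"], truncation=True, max_length=4096)

data = data.map(preprocess, batched=True, remove_columns=["report"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="dragon-seqcls", num_train_epochs=1, per_device_train_batch_size=1),
    train_dataset=data,
    tokenizer=tokenizer,
)
trainer.train()
```

In practice you would use a proper train/validation split and tune the hyperparameters for your task.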
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-longformer-base-domain-specific
null
[ "transformers", "pytorch", "longformer", "fill-mask", "doi:10.57967/hf/2173", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:30+00:00
fill-mask
transformers
# DRAGON Longformer large domain-specific Pretrained model on Dutch clinical reports using a masked language modeling (MLM) objective. It was introduced in [this](#pending) paper.&nbsp;The model was pretrained using domain-specific data (i.e., clinical reports) from scratch. The architecture is the same as [`allenai/longformer-large-4096`](https://huggingface.co/allenai/longformer-large-4096) from HuggingFace. The tokenizer was fitted to the dataset of Dutch medical reports, using the same settings for the tokenizer as [`roberta-base`](https://huggingface.co/FacebookAI/roberta-base). ## Model description Longformer is a transformers model that was pretrained on a large corpus of Dutch clinical reports in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labeling them in any way with an automatic process to generate inputs and labels from those texts. This way, the model learns an inner representation of the Dutch medical language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled reports, for instance, you can train a standard classifier using the features produced by the BERT model as inputs. ## Model variations Multiple architectures were pretrained for the DRAGON challenge. | Model | #params | Language | |------------------------|--------------------------------|-------| | [`joeranbosma/dragon-bert-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-bert-base-mixed-domain) | 109M | Dutch β†’ Dutch | | [`joeranbosma/dragon-roberta-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-base-mixed-domain) | 278M | Multiple β†’ Dutch | | [`joeranbosma/dragon-roberta-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-roberta-large-mixed-domain) | 560M | Multiple β†’ Dutch | | [`joeranbosma/dragon-longformer-base-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-base-mixed-domain) | 149M | English β†’ Dutch | | [`joeranbosma/dragon-longformer-large-mixed-domain`](https://huggingface.co/joeranbosma/dragon-longformer-large-mixed-domain) | 435M | English β†’ Dutch | | [`joeranbosma/dragon-bert-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-bert-base-domain-specific) | 109M | Dutch | | [`joeranbosma/dragon-roberta-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-base-domain-specific) | 278M | Dutch | | [`joeranbosma/dragon-roberta-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-roberta-large-domain-specific) | 560M | Dutch | | [`joeranbosma/dragon-longformer-base-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-base-domain-specific) | 149M | Dutch | | [`joeranbosma/dragon-longformer-large-domain-specific`](https://huggingface.co/joeranbosma/dragon-longformer-large-domain-specific) | 435M | Dutch | ## Intended uses & limitations You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole text (e.g., a clinical report) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. 
## How to use You can use this model directly with a pipeline for masked language modeling: ```python from transformers import pipeline unmasker = pipeline("fill-mask", model="joeranbosma/dragon-longformer-large-domain-specific") unmasker("Dit onderzoek geen aanwijzingen voor significant carcinoom. PIRADS <mask>.") ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("joeranbosma/dragon-longformer-large-domain-specific") model = AutoModel.from_pretrained("joeranbosma/dragon-longformer-large-domain-specific") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ``` ## Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. ## Training data For pretraining, 4,333,201 clinical reports (466,351 consecutive patients) were selected from Ziekenhuisgroep Twente from patients with a diagnostic or interventional visit between 13 July 2000 and 25 April 2023. 180,439 duplicate clinical reports (179,808 patients) were excluded, resulting in 4,152,762 included reports (463,692 patients). These reports were split into training (80%, 3,322,209 reports), validation (10%, 415,276 reports), and testing (10%, 415,277 reports). The testing reports were set aside for future analysis and are not used for pretraining. ## Training procedure ### Pretraining The model was pretrained using masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally masks the future tokens. It allows the model to learn a bidirectional representation of the sentence. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. The HuggingFace implementation was used for pretraining: [`run_mlm.py`](https://github.com/huggingface/transformers/blob/7c6ec195adbfcd22cb6baeee64dd3c24a4b80c74/examples/pytorch/language-modeling/run_mlm.py). ### Pretraining hyperparameters The following hyperparameters were used during pretraining: - `learning_rate`: 1e-4 - `train_batch_size`: 4 - `eval_batch_size`: 4 - `seed`: 42 - `gradient_accumulation_steps`: 64 - `total_train_batch_size`: 256 - `optimizer`: Adam with betas=(0.9,0.999) and epsilon=1e-08 - `lr_scheduler_type`: linear - `num_epochs`: 10.0 - `max_seq_length`: 4096 ### Framework versions - Transformers 4.29.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3 ## Evaluation results Pending evaluation on the DRAGON benchmark. ### BibTeX entry and citation info ```bibtex @article{PENDING} ```
{"license": "cc-by-nc-sa-4.0"}
joeranbosma/dragon-longformer-large-domain-specific
null
[ "transformers", "pytorch", "longformer", "fill-mask", "doi:10.57967/hf/2175", "license:cc-by-nc-sa-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:41:32+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/dta0jvx
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:42:25+00:00
null
null
What is Candid-ex Pills? Candid-ex tablets are an innovative weight-loss capsule formulated with a unique blend of natural ingredients, meticulously chosen for their potent fat-burning properties. Developed by experts in the field of nutrition and wellness, this supplement is designed to support people on their journey toward a healthier, leaner body. Official website: <a href="https://www.nutritionsee.com/candeexgs">www.Candid-ex.com</a> <p><a href="https://www.nutritionsee.com/candeexgs"> <img src="https://www.nutritionsee.com/wp-content/uploads/2024/05/Candid-ex-Mexico-1.png" alt="enter image description here"> </a></p> <a href="https://www.nutritionsee.com/candeexgs">Buy now!! Click the link below for more information and get a 50% discount now... Hurry!</a> Official website: <a href="https://www.nutritionsee.com/candeexgs">www.Candid-ex.com</a>
{"license": "apache-2.0"}
Candid-exMexico/Candid-ex
null
[ "license:apache-2.0", "region:us" ]
null
2024-05-03T09:43:45+00:00
feature-extraction
sentence-transformers
The model is a fine-tuned version of jinaai/jina-embeddings-v2-base-en, designed to support various applications in natural language processing and understanding. ## How to Use This model can be integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started: ```python from transformers import AutoModel, AutoTokenizer llm_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-im2p-webapp" tokenizer = AutoTokenizer.from_pretrained(llm_name) model = AutoModel.from_pretrained(llm_name, trust_remote_code=True) tokens = tokenizer("Your text here", return_tensors="pt") embedding = model(**tokens) ```
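The snippet above returns the raw model output with token-level hidden states; for feature-extraction or sentence-similarity use these are typically pooled into one vector per text. The following is a hedged follow-up sketch (mean pooling over the attention mask, then cosine similarity): a common recipe, not an official snippet for this checkpoint.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "fine-tuned/jina-embeddings-v2-base-en-03052024-im2p-webapp"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)

def embed(texts):
    tokens = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**tokens).last_hidden_state         # (batch, seq_len, dim)
    mask = tokens["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # mean pooling

a, b = embed(["A wooden puzzle for toddlers.", "An educational toy for young children."])
print(F.cosine_similarity(a.unsqueeze(0), b.unsqueeze(0)).item())
```

If this checkpoint keeps the base model's custom code, it may also expose a convenience `encode` method that performs the pooling for you.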
{"language": ["en", "en", "en", "en", "en", "en", "en"], "license": "mit", "tags": ["sentence-transformers", "PyTorch", "Core ML", "ONNX", "allenai/c4", "sentence-similarity", "feature-extraction", "Toys", "Children", "Games", "Educational", "Entertainment"], "datasets": ["fine-tuned/jina-embeddings-v2-base-en-03052024-im2p-webapp"], "pipeline_tag": "feature-extraction"}
fine-tuned/jina-embeddings-v2-base-en-03052024-im2p-webapp
null
[ "sentence-transformers", "safetensors", "bert", "PyTorch", "Core ML", "ONNX", "allenai/c4", "sentence-similarity", "feature-extraction", "Toys", "Children", "Games", "Educational", "Entertainment", "custom_code", "en", "dataset:fine-tuned/jina-embeddings-v2-base-en-03052024-im2p-webapp", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:44:33+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
aho-tai/test
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:45:00+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.1
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
pechaut/Mistral-7b-bridge-v0.1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "region:us" ]
null
2024-05-03T09:45:07+00:00
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="cogni-kai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc.) env = gym.make(model["env_id"]) ```
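The `load_from_hub` helper used above is not defined in this card. A minimal sketch of such a helper, assuming the checkpoint is a pickled Python dictionary on the Hub (the usage above suggests it at least contains an `env_id` entry), could look like this:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-Learning model stored on the Hugging Face Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="cogni-kai/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
print(model.keys())  # typically includes "env_id" and the learned Q-table
```

Only unpickle files from repositories you trust, since `pickle.load` can execute arbitrary code.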
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
cogni-kai/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-05-03T09:45:12+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bengali_news_article_summarization_mt5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 20 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 160 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 100 - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.99 | 83 | 0.8963 | | No log | 2.0 | 167 | 0.3201 | | 9.149 | 2.99 | 250 | 0.2583 | | 9.149 | 3.99 | 334 | 0.2372 | | 0.3009 | 5.0 | 418 | 0.2298 | | 0.3009 | 5.99 | 501 | 0.2244 | | 0.3009 | 7.0 | 585 | 0.2213 | | 0.2524 | 8.0 | 669 | 0.2163 | | 0.2524 | 8.99 | 752 | 0.2136 | | 0.2306 | 10.0 | 836 | 0.2126 | | 0.2306 | 10.99 | 919 | 0.2117 | | 0.2176 | 11.99 | 1003 | 0.2120 | | 0.2176 | 13.0 | 1087 | 0.2116 | | 0.2176 | 13.99 | 1170 | 0.2111 | | 0.2119 | 14.89 | 1245 | 0.2111 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
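The card above lists training details only. As a hedged usage sketch (the exact input formatting used during fine-tuning, e.g. any task prefix, is not documented here, and the generation settings are illustrative), summarization inference with this checkpoint could look like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "fahad1770/bengali_news_article_summarization_mt5"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

article = "আপনার বাংলা সংবাদ নিবন্ধ এখানে দিন।"  # placeholder for a Bengali news article
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```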
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/mt5-small", "model-index": [{"name": "bengali_news_article_summarization_mt5", "results": []}]}
fahad1770/bengali_news_article_summarization_mt5
null
[ "transformers", "safetensors", "mt5", "text2text-generation", "generated_from_trainer", "base_model:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:45:23+00:00
text-generation
transformers
Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [meta-llama/Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct). For this quantization, we used 1 codebook of 16 bits. Results (in progress): | Model | Quantization | Model size, Gb | |------|------|------| | meta-llama/Meta-Llama-3-70B-Instruct | - | 141.2 | | | 1x16 | 21.9 |
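A hedged loading sketch for this checkpoint is given below. It assumes the AQLM inference kernels are installed (e.g. `pip install aqlm[gpu]`) together with a `transformers` release recent enough to include AQLM support; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Meta-Llama-3-70B-Instruct-AQLM-2Bit-1x16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # the 1x16 checkpoint is roughly 22 GB, so one large GPU may suffice
)

messages = [{"role": "user", "content": "Summarise what AQLM quantization does."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```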
{"library_name": "transformers", "tags": ["llama", "facebook", "meta", "llama-3", "conversational", "text-generation-inference"]}
ISTA-DASLab/Meta-Llama-3-70B-Instruct-AQLM-2Bit-1x16
null
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "llama-3", "conversational", "text-generation-inference", "arxiv:2401.06118", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-05-03T09:45:59+00:00
null
null
{}
neeraj0022/bhasa_ft
null
[ "region:us" ]
null
2024-05-03T09:46:32+00:00
null
null
{"license": "mit"}
LeroyAngelo/DigitalProcessInnovation
null
[ "license:mit", "region:us" ]
null
2024-05-03T09:46:36+00:00
other
transformers
# Chronos-T5 (Tiny)

Chronos is a family of **pretrained time series forecasting models** based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.

For details on Chronos models, training data and procedures, and experimental results, please refer to the paper [Chronos: Learning the Language of Time Series](https://arxiv.org/abs/2403.07815).

<p align="center">
  <img src="figures/main-figure.png" width="100%">
  <br />
  <span>
    Fig. 1: High-level depiction of Chronos. (<b>Left</b>) The input time series is scaled and quantized to obtain a sequence of tokens. (<b>Center</b>) The tokens are fed into a language model, which may be either an encoder-decoder or a decoder-only model. The model is trained using the cross-entropy loss. (<b>Right</b>) During inference, we autoregressively sample tokens from the model and map them back to numerical values. Multiple trajectories are sampled to obtain a predictive distribution.
  </span>
</p>

---

## Architecture

The models in this repository are based on the [T5 architecture](https://arxiv.org/abs/1910.10683). The only difference is the vocabulary size: Chronos-T5 models use 4096 tokens, compared to the 32128 of the original T5 models, resulting in fewer parameters.

| Model                                                                   | Parameters | Based on                                                                |
| ----------------------------------------------------------------------- | ---------- | ----------------------------------------------------------------------- |
| [**chronos-t5-tiny**](https://huggingface.co/amazon/chronos-t5-tiny)    | 8M         | [t5-efficient-tiny](https://huggingface.co/google/t5-efficient-tiny)    |
| [**chronos-t5-mini**](https://huggingface.co/amazon/chronos-t5-mini)    | 20M        | [t5-efficient-mini](https://huggingface.co/google/t5-efficient-mini)    |
| [**chronos-t5-small**](https://huggingface.co/amazon/chronos-t5-small)  | 46M        | [t5-efficient-small](https://huggingface.co/google/t5-efficient-small)  |
| [**chronos-t5-base**](https://huggingface.co/amazon/chronos-t5-base)    | 200M       | [t5-efficient-base](https://huggingface.co/google/t5-efficient-base)    |
| [**chronos-t5-large**](https://huggingface.co/amazon/chronos-t5-large)  | 710M       | [t5-efficient-large](https://huggingface.co/google/t5-efficient-large)  |

## Usage

To perform inference with Chronos models, install the package from the GitHub [companion repo](https://github.com/amazon-science/chronos-forecasting) by running:

```
pip install git+https://github.com/amazon-science/chronos-forecasting.git
```

A minimal example showing how to perform inference using Chronos models:

```python
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-tiny",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

df = pd.read_csv("https://raw.githubusercontent.com/AileenNielsen/TimeSeriesAnalysisWithPython/master/data/AirPassengers.csv")

# context must be either a 1D tensor, a list of 1D tensors,
# or a left-padded 2D tensor with batch as the first dimension
context = torch.tensor(df["#Passengers"])
prediction_length = 12
forecast = pipeline.predict(context, prediction_length)  # shape [num_series, num_samples, prediction_length]

# visualize the forecast
forecast_index = range(len(df), len(df) + prediction_length)
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)

plt.figure(figsize=(8, 4))
plt.plot(df["#Passengers"], color="royalblue", label="historical data")
plt.plot(forecast_index, median, color="tomato", label="median forecast")
plt.fill_between(forecast_index, low, high, color="tomato", alpha=0.3, label="80% prediction interval")
plt.legend()
plt.grid()
plt.show()
```

## Citation

If you find Chronos models useful for your research, please consider citing the associated [paper](https://arxiv.org/abs/2403.07815):

```
@article{ansari2024chronos,
  author  = {Ansari, Abdul Fatir and Stella, Lorenzo and Turkmen, Caner and Zhang, Xiyuan and Mercado, Pedro and Shen, Huibin and Shchur, Oleksandr and Rangapuram, Syama Sundar and Pineda Arango, Sebastian and Kapoor, Shubham and Zschiegner, Jasper and Maddix, Danielle C. and Mahoney, Michael W. and Torkkola, Kari and Gordon Wilson, Andrew and Bohlke-Schneider, Michael and Wang, Yuyang},
  title   = {Chronos: Learning the Language of Time Series},
  journal = {arXiv preprint arXiv:2403.07815},
  year    = {2024}
}
```

## Security

See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.

## License

This project is licensed under the Apache-2.0 License.
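## Appendix: Illustrative Tokenization Sketch

The description above says that a series is converted to tokens via scaling and quantization before it is fed to the language model. As a rough illustration of that idea only, the sketch below mean-scales a context window and maps it to integer bin ids; the bin count, bin range, and the absence of special tokens here are assumptions made for the example and do not reflect the actual Chronos implementation, which `ChronosPipeline` already handles internally (so none of this needs to be done by hand when following the usage example above).

```python
import torch


def tokenize_series(values: torch.Tensor, num_bins: int = 4094,
                    low: float = -15.0, high: float = 15.0):
    """Mean-scale a 1D series and map it to integer bin ids.

    Illustrative only: num_bins, (low, high), and the absence of special
    tokens are assumptions, not the settings used by Chronos.
    """
    # Mean scaling: divide by the mean absolute value of the context so that
    # series of very different magnitudes share the same token vocabulary.
    scale = values.abs().mean().clamp_min(1e-8)
    scaled = values / scale

    # Uniform quantization: num_bins + 1 edges define num_bins equal-width
    # bins; values outside [low, high] are clipped to the first/last bin.
    edges = torch.linspace(low, high, num_bins + 1)
    token_ids = (torch.bucketize(scaled, edges) - 1).clamp(0, num_bins - 1)
    return token_ids, scale


# Example: tokenize a short synthetic context window.
context = torch.tensor([112.0, 118.0, 132.0, 129.0, 121.0, 135.0])
tokens, scale = tokenize_series(context)
print(tokens)  # one integer token id per observation
print(scale)   # kept so sampled tokens can be mapped back to the data scale
```

During inference this mapping would run in reverse: sampled token ids are turned back into bin centers and multiplied by the stored scale, which is how sampled trajectories become numerical forecasts (cf. Fig. 1).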
{"license": "apache-2.0", "tags": ["time series", "forecasting", "pretrained models", "foundation models", "time series foundation models", "time-series"], "pipeline_tag": "other"}
shchuro/chronos-t5-tiny-deploy
null
[ "transformers", "safetensors", "t5", "text2text-generation", "time series", "forecasting", "pretrained models", "foundation models", "time series foundation models", "time-series", "other", "arxiv:2403.07815", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:47:16+00:00
text-generation
transformers
{}
ummagumm-a/puzzle4
null
[ "transformers", "gpt_neox", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:47:32+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
avemio-digital/llama3_entity_extraction_category_adapter_merge
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:47:42+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
andreidima/Mistral-7B-v0.1-Romanian-qlora
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-05-03T09:48:17+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
golf2248/pwyek3f
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:48:20+00:00
text-generation
transformers
{}
pechaut/Mistral-7b-bridge
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-05-03T09:48:40+00:00