Dataset columns:

| Column | Type | Values / lengths |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 – 18.3M |
| metadata | stringlengths | 2 – 1.07B |
| id | stringlengths | 5 – 122 |
| last_modified | null | – |
| tags | listlengths | 1 – 1.84k |
| sha | null | – |
| created_at | stringlengths | 25 – 25 |
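The rows below follow this schema. As a minimal sketch (not part of the original dump), a dataset with these columns could be streamed with the 🤗 `datasets` library roughly as follows; the dataset identifier is a placeholder, since the Hub path of this dump is not given above.

```python
from datasets import load_dataset

# Placeholder identifier -- substitute the actual Hub path of this dataset.
ds = load_dataset("some-org/model-card-dump", split="train", streaming=True)

# Each record carries the columns listed in the schema above.
row = next(iter(ds))
print(row["id"], row["pipeline_tag"], row["library_name"], row["created_at"])
print(row["tags"][:5])  # tags is a list (1 to 1.84k entries per the schema)
```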
text-generation
transformers
{}
mPLUG/TinyChart-3B-768
null
[ "transformers", "safetensors", "tiny_chart_phi", "text-generation", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us", "has_space" ]
null
2024-04-26T15:38:31+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat This model is a fine-tuned version of [sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat](https://huggingface.co/sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat) on the HuggingFaceH4/ultrafeedback_binarized dataset. It achieves the following results on the evaluation set: - Loss: 0.6412 - Rewards/chosen: -0.1044 - Rewards/rejected: -0.2494 - Rewards/accuracies: 0.6445 - Rewards/margins: 0.1450 - Logps/rejected: -429.4582 - Logps/chosen: -433.6304 - Logits/rejected: -3.2047 - Logits/chosen: -3.2544 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 128 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.6795 | 0.2092 | 100 | 0.6759 | 0.0017 | -0.0328 | 0.6289 | 0.0345 | -407.8037 | -423.0197 | -3.2565 | -3.3136 | | 0.6584 | 0.4184 | 200 | 0.6534 | -0.0666 | -0.1617 | 0.6445 | 0.0951 | -420.6952 | -429.8561 | -3.2240 | -3.2768 | | 0.6494 | 0.6276 | 300 | 0.6438 | -0.1077 | -0.2410 | 0.6211 | 0.1333 | -428.6237 | -433.9640 | -3.2050 | -3.2553 | | 0.6428 | 0.8368 | 400 | 0.6415 | -0.1001 | -0.2437 | 0.6211 | 0.1436 | -428.8884 | -433.2000 | -3.2046 | -3.2543 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["alignment-handbook", "trl", "dpo", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "model-index": [{"name": "sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "results": []}]}
sanchit-gandhi/distil-zephyr-1.5b-dpo-ultrafeedback
null
[ "transformers", "tensorboard", "safetensors", "mistral", "text-generation", "alignment-handbook", "trl", "dpo", "generated_from_trainer", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:sanchit-gandhi/distil-zephyr-1.5b-ssft-ultrachat", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T15:39:14+00:00
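The record above describes a DPO fine-tune trained with TRL and the alignment handbook. As a rough illustration that is not taken from the card itself, the checkpoint could be loaded for chat-style generation as sketched below, assuming the repository ships a chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sanchit-gandhi/distil-zephyr-1.5b-dpo-ultrafeedback"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Chat-style prompt; the exact template is whatever the tokenizer config defines.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```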
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/pklpgr5
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:41:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/9r90cb0
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:41:29+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/w11spll
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:41:30+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
pruning/kc4xp19
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:41:30+00:00
null
null
{}
monotykamary/snowflake-arctic-embed-l-onnx
null
[ "region:us" ]
null
2024-04-26T15:42:19+00:00
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - ghadaaa/dyslexia <Gallery /> ## Model description These are ghadaaa/gen_images LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use taswira mezyena to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](ghadaaa/dyslexia/tree/main) them in the Files & versions tab. ---
{"license": "openrail++", "library_name": "diffusers", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "pipeline_tag": "text-to-image", "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "taswira mezyena"}
ghadaaa/dyslexia
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-26T15:43:31+00:00
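The LoRA record above states its base model and trigger phrase explicitly. A minimal diffusers sketch for applying those weights (not part of the original card) could look like this:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base model and adapter repo are taken from the record above;
# "taswira mezyena" is the trigger phrase stated in the card.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ghadaaa/dyslexia")

# Example prompt wording beyond the trigger phrase is illustrative only.
image = pipe("taswira mezyena, a colourful illustrated storybook page",
             num_inference_steps=30).images[0]
image.save("sample.png")
```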
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
theGhoul21/SRL-TPAS-v1-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T15:45:34+00:00
null
null
{"license": "apache-2.0"}
aminul11/deepfake-detection
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-26T15:47:42+00:00
text-generation
transformers
{"license": "apache-2.0"}
QuantFactory/Maiden-Unquirked-20B-4.25bpw-h8-exl2
null
[ "transformers", "safetensors", "llama", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T15:49:15+00:00
null
null
{}
manwithlongdick/jinday
null
[ "region:us" ]
null
2024-04-26T15:52:04+00:00
null
null
{}
wwgd/mistral2_7b_instruct_adapter
null
[ "safetensors", "region:us" ]
null
2024-04-26T15:53:11+00:00
text-generation
transformers
{}
tosh97/compliant-reasons-70k
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T15:53:39+00:00
null
transformers
{"license": "llama3"}
mrzidd/trtllm-llama-3-8B-fp16
null
[ "transformers", "license:llama3", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:53:41+00:00
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # results-Meta-Llama-3-8B-qlora-translation-no-tag This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9866 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.9781 | 0.2001 | 1572 | 2.0535 | | 2.0743 | 0.4002 | 3144 | 2.0221 | | 1.7933 | 0.6003 | 4716 | 2.0003 | | 1.7868 | 0.8004 | 6288 | 1.9866 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.1 - Pytorch 2.2.1 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "other", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Meta-Llama-3-8B", "model-index": [{"name": "results-Meta-Llama-3-8B-qlora-translation-no-tag", "results": []}]}
AlienKevin/Meta-Llama-3-8B-qlora-translation-no-tag
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "region:us" ]
null
2024-04-26T15:53:57+00:00
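The PEFT record above is a QLoRA adapter for Meta-Llama-3-8B. A hedged sketch of attaching it to the (gated) base model follows; the prompt format is a guess, since the card does not document the translation direction:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"  # gated: requires an accepted license and an HF token
adapter_id = "AlienKevin/Meta-Llama-3-8B-qlora-translation-no-tag"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attach the LoRA adapter

# Illustrative prompt only; the card does not specify languages or formatting.
prompt = "Translate the following sentence into English:\nBonjour tout le monde.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```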
null
null
{}
Utsav2001/Temp
null
[ "region:us" ]
null
2024-04-26T15:55:43+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
martinsinnona/visdecode_2024_5
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:56:30+00:00
null
transformers
# Uploaded model - **Developed by:** ake178178 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
ake178178/llama8bUnslothdemo
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-26T15:58:49+00:00
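The card above says the model was trained with Unsloth and TRL on top of unsloth/llama-3-8b-bnb-4bit. A rough loading sketch with Unsloth's FastLanguageModel is given below; it assumes the repository can be resolved directly by Unsloth (merged weights or a LoRA adapter), which the card does not state:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="ake178178/llama8bUnslothdemo",  # repo id from the record above
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's faster inference path

inputs = tokenizer("Hello! Briefly introduce yourself.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```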
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Zardian/Cyber_assist3.3
null
[ "transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:00:01+00:00
null
null
{}
krilliam/repo_name
null
[ "region:us" ]
null
2024-04-26T16:00:11+00:00
token-classification
span-marker
# SpanMarker KtZH StaZH This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model that is based on the [GELECTRA Large](https://huggingface.co/stefan-it/span-marker-gelectra-large-germeval14) variant of the **SpanMarker for GermEval 2014 NER** and further fine-tuned on meeting notes from the cantonal council, resolutions of the governing council and law text from the corpus juris of the Canton of Zurich. The documents span the 19th and 20th century, covering both historical language with varying degrees of standardization and contemporary language. Distinguished are `PER`son, `LOC`ation, `ORG`anisation, as well as derivations of Named Entities (tag suffix `-deriv`). The `ORG`anisation class has been extended to encompass institutions that have been deemed to be reasonably unambiguous in isolation or by virtue of their usage in the training data. Purely abstract/prototypical uses of institutions are generally out of scope (the model does not perform concept classification), can however occasionally arise. ## Usage The fine-tuned model can be used like: ```python from span_marker import SpanMarkerModel # Download from the 🤗 Hub model = SpanMarkerModel.from_pretrained("team-data-ktzh/span-marker-ktzh-stazh") # Run inference entities = model.predict("Hans Meier aus Dielsdorf vertritt im Kantonsrat die FDP.") ``` ## Model Details ### Model Description - **Model Type:** SpanMarker - **Encoder:** [deepset/gelectra-large](https://huggingface.co/deepset/gelectra-large) (ELECTRA Large) - **Maximum Sequence Length:** 256 tokens - **Maximum Entity Length:** 8 words - **Language:** de - **License:** MIT ### Model Sources - **Training data:** [GitHub](https://github.com/machinelearningZH/named-entity-recognition_staatsarchiv/tree/main/data/training_data) - **SpanMarker:** [GitHub](https://github.com/tomaarsen/SpanMarkerNER) ### Model Labels | Label | Examples | |:------|:------------------------------------------------------------------------------------------------------| | PER | Hans Müller | | LOC | Zürich | | ORG | SBB, Swissair, Kantonsrat, Bundesgericht | | PERderiv | Müllersche | | LOCderiv | zürcherische | | ORGderiv | bundesgerichtlicher | ## Cross-validation evaluation Evaluation relies on SpanMarker's internal evaluation code, which is based on `seqeval`. ### Average per-label metrics | Label | P | R | F1 | |:---------|-----:|-----:|-----:| | PER | 0.97 | 0.97 | 0.97 | | LOC | 0.95 | 0.96 | 0.96 | | ORG | 0.92 | 0.95 | 0.93 | | PERderiv | 0.40 | 0.30 | 0.33 | | LOCderiv | 0.86 | 0.85 | 0.85 | | ORGderiv | 0.73 | 0.76 | 0.74 | ### Overall per-fold validation metrics | Fold | Precision | Recall | F1 | Accuracy | |:-----:|:---------------------:|:------------------:|:-------------:|:-------------------:| | 0 | 0.927 | 0.952 | 0.939 | 0.992 | | 1 | 0.942 | 0.957 | 0.949 | 0.993 | | 2 | 0.938 | 0.946 | 0.942 | 0.992 | | 3 | 0.921 | 0.951 | 0.936 | 0.992 | | 4 | 0.945 | 0.949 | 0.947 | 0.993 | ### Confusion matrix ![Confusion matrix](confusion_matrix.png) (Note that the confusion matrix also lists other labels from the GermEval 2014 dataset which are ignored in the context of this model.) ## Bias, Risks and Limitations Please note that this is released strictly as a task-bound model for the purpose of annotating historical and future documents from the collections it was trained on, as well as the official gazette of the Canton of Zurich. No claims of generalization are made outside of the specific use case it was developed for. 
The training data was annotated according to a specific but informal annotation scheme and the bias of the original model has been retained where it was found not to interfere with the use case. Be mindful of idiosyncrasies when applying to other documents. ### Recommendations The original XML documents of the training set can be found [here](https://github.com/machinelearningZH/named-entity-recognition_staatsarchiv/tree/main/data/training_data). The annotations may be freely modified to tailor the model to an alternative use case. Note that [a modified TEI Publisher](https://github.com/machinelearningZH/named-entity-recognition_staatsarchiv/tree/main/ner_tei-publisher-app) and [this Jupyter notebook](https://github.com/machinelearningZH/named-entity-recognition_staatsarchiv/tree/main/notebooks/get_training_data) are required to generate a Huggingface Dataset. ## Training Details ### Training Hyperparameters - learning_rate: Decay from 1e-05 to 5e-07 - train_batch_size: 4 - seed: 42 - optimizer: AdamW with betas=(0.9,0.999), epsilon=1e-08, weight_decay=0.01 - lr_scheduler_type: Polynomial (cubic) - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 10 - gradient_accumulation_steps: 2 - steps: 16000 ## Training data sources The training data was sampled from the following collections from the [data catalog of the Canton of Zurich](https://www.zh.ch/de/politik-staat/statistik-daten/datenkatalog.html#/), curated by the [Staatsarchiv des Kantons Zürich (state archives of the Canton of Zurich)](https://www.zh.ch/de/direktion-der-justiz-und-des-innern/staatsarchiv.html): * **Meeting notes of the cantonal council**: [Zürcher Kantonsratsprotokolle des 19. und 20. Jahrhunderts](https://www.zh.ch/de/politik-staat/statistik-daten/datenkatalog.html#/datasets/732@staatsarchiv-kanton-zuerich) * **Resolutions of the governing council**: [Zürcher Regierungsratsbeschlüsse des 19. und 20. 
Jahrhunderts](https://www.zh.ch/de/politik-staat/statistik-daten/datenkatalog.html#/datasets/466@staatsarchiv-kanton-zuerich) * **Corpus juris**: [Erlasse der Zürcher Gesetzessammlung ab 1803](https://www.zh.ch/de/politik-staat/statistik-daten/datenkatalog.html#/datasets/712@staatsarchiv-kanton-zuerich) ## Bibliography This work builds upon: ``` @software{Aarsen_SpanMarker, author = {Aarsen, Tom}, license = {Apache-2.0}, title = {{SpanMarker for Named Entity Recognition}}, url = {https://github.com/tomaarsen/SpanMarkerNER} } @article{aarsenspanmarker, title={SpanMarker for Named Entity Recognition}, author={Aarsen, Tom and del Prado Martin, Fermin Moscoso and Suero, Daniel Vila and Oosterhuis, Harrie} } @inproceedings{ye-etal-2022-packed, title = "Packed Levitated Marker for Entity and Relation Extraction", author = "Ye, Deming and Lin, Yankai and Li, Peng and Sun, Maosong", editor = "Muresan, Smaranda and Nakov, Preslav and Villavicencio, Aline", booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.acl-long.337", doi = "10.18653/v1/2022.acl-long.337", pages = "4904--4917"}", } @misc{chan2020germans, author = {Chan, Branden and Schweter, Stefan and Möller, Timo}, description = {German's Next Language Model}, keywords = {bert gbert languagemodel lm}, title = {German's Next Language Model}, url = {http://arxiv.org/abs/2010.10906}, year = 2020 } @inproceedings{benikova-etal-2014-nosta, title = {NoSta-D Named Entity Annotation for German: Guidelines and Dataset}, author = {Benikova, Darina and Biemann, Chris and Reznicek, Marc}, booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)}, month = {may}, year = {2014}, address = {Reykjavik, Iceland}, publisher = {European Language Resources Association (ELRA)}, url = {http://www.lrec-conf.org/proceedings/lrec2014/pdf/276_Paper.pdf}, pages = {2524--2531}, } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
{"language": ["de"], "license": "mit", "library_name": "span-marker", "tags": ["span-marker", "token-classification", "ner", "named-entity-recognition"], "base_model": "stefan-it/span-marker-gelectra-large-germeval14", "pipeline_tag": "token-classification", "widget": [{"text": "Konstruiertes Beispiel: Hans Meier besitzt eine Firma im z\u00fcrcherischen Wil. Die Meier AG war Thema einer kantonsr\u00e4tlichen Sitzung. Im Meierschen Besitz ist auch ein Anwesen, das sich in unmittelbarer Nachbarschaft zu Liegenschaften des Kantons befindet und unweit der Grenze zum Kanton Aargau gelegen ist.", "example_title": "Test sentence with all labels"}, {"text": "B\u00f6ckli - Z\u00fcrich erkl\u00e4rt, da\u00df die Sozialdemokratische Fraktion bei allem Verst\u00e4ndnis f\u00fcr die Kritik Winigers der Vorlage mit gro\u00dfer Mehrheit zustimmt. Das ungeschickte Vorgehen der \u00abSwissair\u00bb bei der Kapitalerh\u00f6hung ist zu bedauern. Es war ungl\u00fccklich, da\u00df keine \u00f6ffentliche Auflage der neuen Aktien durchgef\u00fchrt wurde. F\u00fcr die Stellungnahme der Z\u00fcrcher Kantonalbank war ausschlaggebend, da\u00df das Bankgesetz solche Beteiligungen ablehnt.", "example_title": "Cantonal council meeting notes"}]}
team-data-ktzh/span-marker-ktzh-stazh
null
[ "span-marker", "safetensors", "token-classification", "ner", "named-entity-recognition", "de", "arxiv:2010.10906", "base_model:stefan-it/span-marker-gelectra-large-germeval14", "license:mit", "region:us" ]
null
2024-04-26T16:00:31+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ntnhan/gemma-7b-1.1-it-MetaQA40K
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:01:51+00:00
null
null
{"license": "openrail"}
weillon/iwiwiwiw
null
[ "license:openrail", "region:us" ]
null
2024-04-26T16:02:23+00:00
null
null
{"license": "openrail"}
weillon/VFERNANDES
null
[ "license:openrail", "region:us" ]
null
2024-04-26T16:03:50+00:00
text-generation
transformers
{}
LuangMV97/Roberta-GPT2_EmpAI_EOS
null
[ "transformers", "tensorboard", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:03:55+00:00
null
null
{}
kuleshov-group/pt-mamba-seqlen-131k_d_model-256_n_layer-16_lr-8e-3
null
[ "region:us" ]
null
2024-04-26T16:03:59+00:00
null
null
{}
Brunapupo/teledemarto-model
null
[ "region:us" ]
null
2024-04-26T16:04:30+00:00
fill-mask
transformers
{}
gabrielbianchin/base_esm2_t6_long
null
[ "transformers", "safetensors", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:05:24+00:00
fill-mask
transformers
{}
gabrielbianchin/base_esm2_t12_long
null
[ "transformers", "safetensors", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:06:00+00:00
null
null
{"license": "apache-2.0"}
AlekseyScorpi/saiga_mistral_7b_vacancies_GGUF
null
[ "gguf", "license:apache-2.0", "region:us" ]
null
2024-04-26T16:06:04+00:00
fill-mask
transformers
{}
gabrielbianchin/base_esm2_t30_long
null
[ "transformers", "safetensors", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:06:12+00:00
fill-mask
transformers
{}
gabrielbianchin/base_esm2_t33_long
null
[ "transformers", "safetensors", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:06:31+00:00
fill-mask
transformers
{}
gabrielbianchin/base_esm2_t36_long
null
[ "transformers", "pytorch", "esm", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:06:46+00:00
null
null
{}
legal-pythia/bart_financial
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:06:49+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # temp123 This model is a fine-tuned version of [cointegrated/rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2633 - Accuracy: 0.497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 256 - eval_batch_size: 256 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 211 | 1.6566 | 0.349 | | No log | 2.0 | 422 | 1.4916 | 0.4083 | | 1.635 | 3.0 | 633 | 1.4112 | 0.4388 | | 1.635 | 4.0 | 844 | 1.3611 | 0.4613 | | 1.3947 | 5.0 | 1055 | 1.3261 | 0.4688 | | 1.3947 | 6.0 | 1266 | 1.3025 | 0.478 | | 1.3947 | 7.0 | 1477 | 1.2838 | 0.4897 | | 1.2985 | 8.0 | 1688 | 1.2726 | 0.4932 | | 1.2985 | 9.0 | 1899 | 1.2658 | 0.4972 | | 1.2534 | 10.0 | 2110 | 1.2633 | 0.497 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "cointegrated/rubert-tiny2", "model-index": [{"name": "temp123", "results": []}]}
IvashinMaxim/temp123
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:cointegrated/rubert-tiny2", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:07:06+00:00
null
null
{}
gabrielbianchin/esm2_t6_long_int4
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:07:15+00:00
null
null
{}
gabrielbianchin/esm2_t12_long_int4
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:07:22+00:00
null
null
{}
gabrielbianchin/esm2_t30_long_int4
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:07:39+00:00
null
null
{}
gabrielbianchin/esm2_t33_long_int4
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:07:48+00:00
null
null
{}
gabrielbianchin/esm2_t36_long_int4
null
[ "safetensors", "region:us" ]
null
2024-04-26T16:07:56+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
antonyo94/Enlighten_Instruct_merged
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:08:16+00:00
null
transformers
{}
jayasuryajsk/Llama-3-long
null
[ "transformers", "gguf", "llama", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:10:47+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # griffin-c3t-8L-v0.02-fineweb Pretraining experiment with griffin/recurrent_gemma arch ## Model description Further training of [pszemraj/griffin-v0.01-c3t-8layer-simplewiki-silu](https://hf.co/pszemraj/griffin-v0.01-c3t-8layer-simplewiki-silu) on the BEE-spoke-data/fineweb-1M_en-med dataset. It achieves the following results on the evaluation set: - Loss: 5.1888 - Accuracy: 0.2326 - Num Input Tokens Seen: 798621696 ## numbers tl;dr its bad/would need more training: hf (pretrained=pszemraj/griffin-c3t-8L-v0.02-fineweb,trust_remote_code=True,dtype=float), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 4 | Tasks |Version|Filter|n-shot| Metric | Value | | Stderr | |--------------|------:|------|-----:|----------|----------:|---|---------:| |winogrande | 1|none | 0|acc | 0.5146|± | 0.0140| |piqa | 1|none | 0|acc | 0.5511|± | 0.0116| | | |none | 0|acc_norm | 0.5261|± | 0.0116| |openbookqa | 1|none | 0|acc | 0.1140|± | 0.0142| | | |none | 0|acc_norm | 0.2240|± | 0.0187| |lambada_openai| 1|none | 0|perplexity|209503.2246|± |11711.4041| | | |none | 0|acc | 0.0000|± | 0.0000| |boolq | 2|none | 0|acc | 0.3783|± | 0.0085| |arc_easy | 1|none | 0|acc | 0.2593|± | 0.0090| | | |none | 0|acc_norm | 0.2774|± | 0.0092| ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 80085 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 1.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Input Tokens Seen | |:-------------:|:------:|:----:|:---------------:|:--------:|:-----------------:| | 6.0703 | 0.0656 | 400 | 6.2332 | 0.1701 | 52428800 | | 5.723 | 0.1313 | 800 | 5.9116 | 0.1893 | 104857600 | | 5.5106 | 0.1969 | 1200 | 5.7516 | 0.1976 | 157286400 | | 5.455 | 0.2626 | 1600 | 5.6427 | 0.2032 | 209715200 | | 5.3236 | 0.3282 | 2000 | 5.5567 | 0.2103 | 262144000 | | 5.2764 | 0.3938 | 2400 | 5.4919 | 0.2151 | 314572800 | | 5.1625 | 0.4595 | 2800 | 5.4436 | 0.2176 | 367001600 | | 5.1851 | 0.5251 | 3200 | 5.3975 | 0.2206 | 419430400 | | 5.0618 | 0.5908 | 3600 | 5.3624 | 0.2199 | 471859200 | | 5.0278 | 0.6564 | 4000 | 5.3242 | 0.2236 | 524288000 | | 5.0389 | 0.7220 | 4400 | 5.2920 | 0.2264 | 576716800 | | 4.9732 | 0.7877 | 4800 | 5.2674 | 0.2276 | 629145600 | | 4.9375 | 0.8533 | 5200 | 5.2418 | 0.2292 | 681574400 | | 4.9322 | 0.9190 | 5600 | 5.2166 | 0.2312 | 734003200 | | 4.8818 | 0.9846 | 6000 | 5.1981 | 0.2315 | 786432000 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
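For reference, a minimal inference sketch (not part of the original card): the lm-eval invocation above loads the checkpoint with `trust_remote_code=True`, so the same flag is assumed here; the model id comes from this record.

```python
# Minimal generation sketch (assumption: the hosted checkpoint loads via trust_remote_code,
# mirroring the lm-eval run above). Outputs will reflect the limited pretraining noted in the card.
import transformers

pipe = transformers.pipeline(
    "text-generation",
    model="pszemraj/griffin-c3t-8L-v0.02-fineweb",
    trust_remote_code=True,
)
print(pipe("The fineweb dataset is", max_new_tokens=32)[0]["generated_text"])
```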
{"language": ["en"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["BEE-spoke-data/fineweb-1M_en-med"], "metrics": ["accuracy"], "base_model": "pszemraj/griffin-v0.01-c3t-8layer-simplewiki-silu"}
pszemraj/griffin-c3t-8L-v0.02-fineweb
null
[ "transformers", "safetensors", "recurrent_gemma", "text-generation", "generated_from_trainer", "en", "dataset:BEE-spoke-data/fineweb-1M_en-med", "base_model:pszemraj/griffin-v0.01-c3t-8layer-simplewiki-silu", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:10:50+00:00
text2text-generation
transformers
{}
mbastardi24/marian-finetuned-kde4-en-to-fr
null
[ "transformers", "tensorboard", "safetensors", "marian", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:11:35+00:00
null
null
{"license": "apache-2.0"}
mrhoneymoon/omtest
null
[ "license:apache-2.0", "region:us" ]
null
2024-04-26T16:11:37+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # NDD-phoenix_test-content_tags This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3591 - Accuracy: 0.8595 - F1: 0.8554 - Precision: 0.8702 - Recall: 0.8595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.1629 | 0.9996 | 673 | 0.3110 | 0.8967 | 0.8965 | 0.8965 | 0.8967 | | 0.115 | 1.9993 | 1346 | 0.3591 | 0.8595 | 0.8554 | 0.8702 | 0.8595 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "NDD-phoenix_test-content_tags", "results": []}]}
lgk03/NDD-phoenix_test-content_tags
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:12:59+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AlexWortega/llama3_patched
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:13:29+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-fb-housing-posts This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2008 - Accuracy: 0.9305 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 55 | 0.4092 | 0.8449 | | No log | 2.0 | 110 | 0.2328 | 0.9198 | | No log | 3.0 | 165 | 0.2008 | 0.9305 | ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
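As a usage illustration (not part of the original card), a minimal sketch of scoring a post with the fine-tuned classifier; the repo id is taken from this record, and the label names are whatever the checkpoint's config defines.

```python
# Sketch: classifying a housing-related post with the fine-tuned DistilBERT checkpoint.
# Assumption: the repo id below is public; label names come from the model config.
from transformers import pipeline

clf = pipeline("text-classification", model="hoaj/distilbert-base-uncased-fb-housing-posts")
print(clf("2BR apartment available downtown next month, utilities included."))
```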
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "distilbert/distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-fb-housing-posts", "results": []}]}
hoaj/distilbert-base-uncased-fb-housing-posts
null
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:13:30+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"}
yiyic/llama3b-text-entprop-lora-clf-epoch-0
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
null
2024-04-26T16:14:22+00:00
null
null
{}
ArtChicken/sohwx-woman-xl-realvisv4
null
[ "region:us" ]
null
2024-04-26T16:15:51+00:00
text-generation
transformers
# Keiana-L3-Test5.65-8B-12.5

Keiana-L3-Test5.65-8B-12.5 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

# Keep in mind that this merged model has not been properly tested yet, which may lead to vocabulary errors.

* [Kaoeiri/Keiana-L3-Test5.6-8B-12](https://huggingface.co/Kaoeiri/Keiana-L3-Test5.6-8B-12)
* [Kaoeiri/Experimenting-Test4.5-8B-2](https://huggingface.co/Kaoeiri/Experimenting-Test4.5-8B-2)

## 🧩 Configuration

```yaml
merge_method: task_arithmetic
dtype: float16
base_model: Kaoeiri/Keiana-L3-Test5.2-8B-8
models:
  - model: Kaoeiri/Keiana-L3-Test5.6-8B-12
    parameters:
      weight: 1
  - model: Kaoeiri/Experimenting-Test4.5-8B-2
    parameters:
      weight: .1
      density: .2
parameters:
  normalize: true
  int8_mask: true
```

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Kaoeiri/Keiana-L3-Test5.65-8B-12.5"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
{"tags": ["merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.6-8B-12", "Kaoeiri/Experimenting-Test4.5-8B-2"], "base_model": ["Kaoeiri/Keiana-L3-Test5.6-8B-12", "Kaoeiri/Experimenting-Test4.5-8B-2"]}
Kaoeiri/Keiana-L3-Test5.65-8B-12.5
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "mergekit", "lazymergekit", "Kaoeiri/Keiana-L3-Test5.6-8B-12", "Kaoeiri/Experimenting-Test4.5-8B-2", "conversational", "base_model:Kaoeiri/Keiana-L3-Test5.6-8B-12", "base_model:Kaoeiri/Experimenting-Test4.5-8B-2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:15:53+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
fxmeng/PiSSA-Llama-2-7B-r16-4bit-5iter
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T16:16:50+00:00
text-generation
transformers
This model is randomly initialized, using the config from [databricks/dbrx-instruct](https://huggingface.co/databricks/dbrx-instruct) but with hidden dimension = 256. Note that the model is stored in float16.

Code used to create it:

```python
import transformers
import torch
import os
from huggingface_hub import create_repo, upload_folder

source_model_id = 'databricks/dbrx-instruct'
save_path = '/tmp/yujiepan/dbrx-tiny256-random'
repo_id = 'yujiepan/dbrx-tiny256-random'

config = transformers.AutoConfig.from_pretrained(
    source_model_id, trust_remote_code=True)
config.attn_config.kv_n_heads = 4
config.d_model = 256
config.ffn_config.ffn_hidden_size = 512
config.n_heads = 8
config.n_layers = 2

model = transformers.AutoModelForCausalLM.from_config(
    config, trust_remote_code=True)
model = model.half()
model.save_pretrained(save_path)

tokenizer = transformers.AutoTokenizer.from_pretrained(
    source_model_id, trust_remote_code=True)
tokenizer.save_pretrained(save_path)

result = transformers.pipelines.pipeline(
    'text-generation', model=model.float(), tokenizer=tokenizer)('Hello')
print(result)

os.system(f'ls -alh {save_path}')
create_repo(repo_id, exist_ok=True)
upload_folder(repo_id=repo_id, folder_path=save_path)
```
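For completeness, a minimal load-back sketch (not from the original card): it mirrors the smoke-test pipeline call in the creation script, but loads from the uploaded repo id instead of the in-memory model object.

```python
# Sketch (assumption: the repo id above is public and loads with trust_remote_code,
# matching the flags used in the creation script).
import transformers

pipe = transformers.pipeline(
    'text-generation',
    model='yujiepan/dbrx-tiny256-random',
    trust_remote_code=True,
)
print(pipe('Hello', max_new_tokens=8))
```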
{"library_name": "transformers", "pipeline_tag": "text-generation", "inference": true, "widget": [{"text": "Hello!", "example_title": "Hello world", "group": "Python"}]}
yujiepan/dbrx-tiny256-random
null
[ "transformers", "safetensors", "dbrx", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:17:36+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_eli5_clm-model_blenderbot_small This model is a fine-tuned version of [facebook/blenderbot_small-90M](https://huggingface.co/facebook/blenderbot_small-90M) on the eli5_category dataset. It achieves the following results on the evaluation set: - Loss: 0.0658 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.0661 | 0.5 | 22943 | 0.0658 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["eli5_category"], "base_model": "facebook/blenderbot_small-90M", "model-index": [{"name": "my_awesome_eli5_clm-model_blenderbot_small", "results": []}]}
kellyjiayixu/my_awesome_eli5_clm-model_blenderbot_small
null
[ "transformers", "tensorboard", "safetensors", "blenderbot-small", "text-generation", "generated_from_trainer", "dataset:eli5_category", "base_model:facebook/blenderbot_small-90M", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:17:43+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_imdb This model is a fine-tuned version of [textattack/albert-base-v2-imdb](https://huggingface.co/textattack/albert-base-v2-imdb) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3257 - Accuracy: 0.9398 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1749 | 1.0 | 1563 | 0.2265 | 0.9400 | | 0.0649 | 2.0 | 3126 | 0.3257 | 0.9398 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
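A usage sketch (not in the original card): running the fine-tuned sentiment classifier on a review; the repo id is taken from this record.

```python
# Sketch: scoring an IMDB-style review with the fine-tuned ALBERT checkpoint.
# Assumption: the repo id below is public; labels come from the model config.
from transformers import pipeline

clf = pipeline("text-classification", model="ljgries/my_awesome_imdb")
print(clf("A surprisingly tender film with a clumsy final act."))
```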
{"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "textattack/albert-base-v2-imdb", "model-index": [{"name": "my_awesome_imdb", "results": []}]}
ljgries/my_awesome_imdb
null
[ "transformers", "tensorboard", "safetensors", "albert", "text-classification", "generated_from_trainer", "base_model:textattack/albert-base-v2-imdb", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:17:46+00:00
feature-extraction
null
{"language": ["en"], "license": "mit", "tags": ["code", "medical"], "metrics": ["accuracy"], "pipeline_tag": "feature-extraction"}
Zeal-Nir/dr-handwriting-keras-model
null
[ "code", "medical", "feature-extraction", "en", "license:mit", "region:us" ]
null
2024-04-26T16:18:40+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
presencesw/Vistral-7B-UIT-CLAIM_5
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:18:54+00:00
null
null
Trained from dataset v4; text was lowercased, normalized, cleaned, and partially stripped of punctuation.
{}
robzchhangte/Tokenizer-MZGPT2-uncased-cleaned
null
[ "region:us" ]
null
2024-04-26T16:19:02+00:00
text-generation
transformers
This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

## This is an experiment on fixing models with incorrect behaviors

The experiment tests and refines a training and evaluation pipeline research framework for Large Language Models (LLMs). Its primary objective is to identify potential optimizations, with a focus on data engineering, architectural efficiency, and evaluation performance. To that end, we explore adjustments to data preprocessing, model training algorithms, and evaluation metrics.

## Quantized version (GGUF)

[Mistroll-7B-v2.2-Q8_0](https://huggingface.co/BarraHome/Mistroll-7B-v2.2/blob/main/Mistroll-7B-v2.2-Q8_0.gguf)

Thanks to Yam for the incredible experiment, and to the Unsloth community!

PS: Numero uno, brothers!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b6afa756f1af7b46f1b513/oLTOey4qWj6-Nk_G3Qadi.png)
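A minimal sketch (not from the original card) of running the linked Q8_0 GGUF locally with llama-cpp-python; the filename is taken from the link above, and the chat template is assumed to be read from the GGUF metadata.

```python
# Sketch: chat completion against the downloaded Mistroll-7B-v2.2-Q8_0.gguf file.
# Assumptions: llama-cpp-python is installed and the GGUF has been downloaded locally.
from llama_cpp import Llama

llm = Llama(model_path="Mistroll-7B-v2.2-Q8_0.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```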
{"language": ["en", "es"], "license": "mit", "library_name": "transformers", "tags": ["unsloth", "gguf", "safetensors"], "pipeline_tag": "text-generation", "base_model": "yam-peleg/Experiment26-7B"}
BarraHome/Mistroll-7B-v2.2
null
[ "transformers", "safetensors", "gguf", "mistral", "text-generation", "unsloth", "conversational", "en", "es", "base_model:yam-peleg/Experiment26-7B", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2024-04-26T16:19:38+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sft-dynamoguard This model is a fine-tuned version of [dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978](https://huggingface.co/dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978) on the dynamofl/toxicity-70k-april26 dataset. It achieves the following results on the evaluation set: - Loss: 0.1604 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 6 - gradient_accumulation_steps: 16 - total_train_batch_size: 768 - total_eval_batch_size: 24 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-------:|:----:|:---------------:| | 0.705 | 0.9711 | 21 | 0.5762 | | 0.2553 | 1.9884 | 43 | 0.2231 | | 0.2009 | 2.9595 | 64 | 0.1939 | | 0.1822 | 3.9769 | 86 | 0.1838 | | 0.1778 | 4.9942 | 108 | 0.1780 | | 0.1735 | 5.9653 | 129 | 0.1740 | | 0.1686 | 6.9827 | 151 | 0.1709 | | 0.1662 | 8.0 | 173 | 0.1685 | | 0.1632 | 8.9711 | 194 | 0.1666 | | 0.1608 | 9.9884 | 216 | 0.1650 | | 0.1588 | 10.9595 | 237 | 0.1637 | | 0.1597 | 11.9769 | 259 | 0.1627 | | 0.1571 | 12.9942 | 281 | 0.1620 | | 0.1568 | 13.9653 | 302 | 0.1614 | | 0.1553 | 14.9827 | 324 | 0.1610 | | 0.1536 | 16.0 | 346 | 0.1607 | | 0.1544 | 16.9711 | 367 | 0.1605 | | 0.1553 | 17.9884 | 389 | 0.1605 | | 0.1554 | 18.9595 | 410 | 0.1604 | | 0.156 | 19.4220 | 420 | 0.1604 | ### Framework versions - Transformers 4.40.1 - Pytorch 2.3.0+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "alignment-handbook", "generated_from_trainer"], "datasets": ["dynamofl/toxicity-70k-april26"], "base_model": "dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978", "model-index": [{"name": "sft-dynamoguard", "results": []}]}
jamesoneill12/sft-dynamoguard
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "alignment-handbook", "trl", "sft", "generated_from_trainer", "conversational", "dataset:dynamofl/toxicity-70k-april26", "base_model:dynamofl/dynamo-1.6B-v0.4-mosaic-dynamoDPO-iter0-2978", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:22:58+00:00
text-generation
transformers
# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "PATH_TO_THIS_REPO"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype='auto'
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "Hello! How can I assist you today?"
print(response)
```
{"license": "other", "library_name": "transformers", "tags": ["autotrain", "text-generation-inference", "text-generation", "peft"], "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
jiminHuang/v5DSd6AWRBWWx68s
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain", "text-generation-inference", "peft", "conversational", "license:other", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:23:03+00:00
null
null
{}
pollitoconpapass/handwriting.keras
null
[ "region:us" ]
null
2024-04-26T16:23:21+00:00
text-generation
transformers
{}
simonycl/self-seq-Llama-2-7b-hf-new-without-a
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:23:43+00:00
null
null
{}
Jitender191/demo-onnx
null
[ "region:us" ]
null
2024-04-26T16:23:48+00:00
text-classification
transformers
{}
etadevosyan/allergy_test
null
[ "transformers", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:24:04+00:00
text-generation
transformers
{}
simonycl/self-seq-Llama-2-7b-hf-tulu
null
[ "transformers", "pytorch", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:24:14+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
DBangshu/Gemma2_var_aware
null
[ "transformers", "safetensors", "gemma", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:24:26+00:00
null
peft
{"language": ["en"], "license": "afl-3.0", "library_name": "peft", "tags": ["biology"]}
MAJJED/llm1
null
[ "peft", "biology", "en", "license:afl-3.0", "region:us" ]
null
2024-04-26T16:24:26+00:00
null
null
{"license": "mit"}
TIGER-Lab/MAmmoTH-2-8x7B
null
[ "license:mit", "region:us" ]
null
2024-04-26T16:25:28+00:00
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
AmrMohamed/Mistral-7B-Instruct-v0.2-grading
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:26:07+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # platzi-vit-model-arturo-guerrero This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 9.5298 - Accuracy: 0.0827 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2772 | 0.82 | 500 | 7.9073 | 0.0655 | | 0.1109 | 1.64 | 1000 | 8.2218 | 0.0799 | | 0.0453 | 2.46 | 1500 | 8.6023 | 0.0863 | | 0.0295 | 3.28 | 2000 | 8.6855 | 0.0806 | | 0.0172 | 4.11 | 2500 | 8.8657 | 0.0820 | | 0.0071 | 4.93 | 3000 | 9.1984 | 0.0827 | | 0.0048 | 5.75 | 3500 | 9.3593 | 0.0806 | | 0.0029 | 6.57 | 4000 | 9.4728 | 0.0820 | | 0.0045 | 7.39 | 4500 | 9.5298 | 0.0827 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
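A minimal inference sketch for the card above, assuming the fine-tuned checkpoint and its image processor are available on the Hub under the repo id shown in this record (arturoxdev/platzi-vit-model-arturo-guerrero) and that `transformers` and `Pillow` are installed:

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned ViT classifier from the Hub (repo id taken from this record).
classifier = pipeline(
    "image-classification",
    model="arturoxdev/platzi-vit-model-arturo-guerrero",
)

image = Image.open("example.jpg")  # replace with any local image path
for prediction in classifier(image):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```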
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "platzi-vit-model-arturo-guerrero", "results": []}]}
arturoxdev/platzi-vit-model-arturo-guerrero
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:26:40+00:00
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Edinburgh-StatMT/mistral-7b-stage2-v1.abcd.loraemb
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:27:06+00:00
null
transformers
{"license": "mit"}
benchen1/llama3-8b-gguf
null
[ "transformers", "gguf", "llama", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:27:31+00:00
null
null
{}
suthawadee/thai_receiptA
null
[ "region:us" ]
null
2024-04-26T16:28:23+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224-in22k_fold4 This model is a fine-tuned version of [microsoft/swin-large-patch4-window7-224-in22k](https://huggingface.co/microsoft/swin-large-patch4-window7-224-in22k) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.1431 - Accuracy: 0.6724 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1565 | 1.0 | 924 | 1.1090 | 0.6212 | | 0.952 | 2.0 | 1848 | 0.9958 | 0.6527 | | 0.7659 | 3.0 | 2772 | 0.9710 | 0.6689 | | 0.7157 | 4.0 | 3696 | 0.9729 | 0.6716 | | 0.6176 | 5.0 | 4620 | 0.9826 | 0.6741 | | 0.4164 | 6.0 | 5544 | 1.0267 | 0.6781 | | 0.4123 | 7.0 | 6468 | 1.0495 | 0.6792 | | 0.3897 | 8.0 | 7392 | 1.1038 | 0.6722 | | 0.3475 | 9.0 | 8316 | 1.1150 | 0.6771 | | 0.2993 | 10.0 | 9240 | 1.1431 | 0.6724 | ### Framework versions - Transformers 4.35.0 - Pytorch 2.1.0 - Datasets 2.14.6 - Tokenizers 0.14.1
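A minimal inference sketch for the card above, assuming the fine-tuned checkpoint is available under the repo id shown in this record; the class names come from the checkpoint's `id2label` mapping, which the card does not list:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224-in22k_fold4"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("sample.jpg")  # replace with a local image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])  # label set defined by the fine-tuning dataset
```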
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["imagefolder"], "metrics": ["accuracy"], "base_model": "microsoft/swin-large-patch4-window7-224-in22k", "model-index": [{"name": "Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224-in22k_fold4", "results": [{"task": {"type": "image-classification", "name": "Image Classification"}, "dataset": {"name": "imagefolder", "type": "imagefolder", "config": "default", "split": "test", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.6724464914657274, "name": "Accuracy"}]}]}]}
onizukal/Boya1_RMSProp_1-e5_10Epoch_swin-large-patch4-window7-224-in22k_fold4
null
[ "transformers", "safetensors", "swin", "image-classification", "generated_from_trainer", "dataset:imagefolder", "base_model:microsoft/swin-large-patch4-window7-224-in22k", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:29:18+00:00
summarization
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-lora-summarization-medical This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.0521 - Rouge1: 0.4304 - Rouge2: 0.2352 - Rougel: 0.3663 - Rougelsum: 0.366 - Gen Len: 18.1767 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 2.5079 | 1.0 | 6250 | 2.1220 | 0.4263 | 0.229 | 0.3597 | 0.3594 | 18.33 | | 2.4566 | 2.0 | 12500 | 2.0844 | 0.4267 | 0.2312 | 0.3622 | 0.3618 | 18.2773 | | 2.4242 | 3.0 | 18750 | 2.0616 | 0.4311 | 0.2358 | 0.366 | 0.3656 | 18.1307 | | 2.4058 | 4.0 | 25000 | 2.0532 | 0.4316 | 0.2367 | 0.366 | 0.3659 | 18.1753 | | 2.4119 | 5.0 | 31250 | 2.0521 | 0.4304 | 0.2352 | 0.3663 | 0.366 | 18.1767 | ### Framework versions - PEFT 0.10.0 - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
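A minimal inference sketch, assuming the LoRA adapter in this record (zbigi/bart-base-lora-summarization-medical) follows the standard PEFT layout on top of facebook/bart-base:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "facebook/bart-base"
adapter_id = "zbigi/bart-base-lora-summarization-medical"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the LoRA adapter
model.eval()

text = "Example abstract: the patient cohort showed improved outcomes after treatment ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
with torch.no_grad():
    summary_ids = model.generate(**inputs, max_new_tokens=60, num_beams=4)

print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```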
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer", "medical"], "datasets": ["mystic-leung/medical_cord19"], "metrics": ["rouge"], "base_model": "facebook/bart-base", "pipeline_tag": "summarization", "model-index": [{"name": "bart-base-lora-summarization-medical", "results": []}]}
zbigi/bart-base-lora-summarization-medical
null
[ "peft", "tensorboard", "safetensors", "generated_from_trainer", "medical", "summarization", "dataset:mystic-leung/medical_cord19", "base_model:facebook/bart-base", "license:apache-2.0", "region:us" ]
null
2024-04-26T16:30:25+00:00
text-classification
transformers
{}
gamallo/bert-base-gl-cased-finetuned-imdb
null
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:31:39+00:00
text-generation
transformers
<div align="center"> <img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/2FhDh8NDvMl7iSxbQz9BP.png) <div align="center"> <h1>Advancing Open-source Large Language Models in Medical Domain</h1> </div> <p align="center" style="margin-top: 0px;"> <a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">Online Demo</span> </a> | <a href="https://github.com/openlifescience-ai"> <img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style=" margin-right: 5px;">GitHub</span> </a> | <a href="#"> <img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text" style="margin-right: 5px;">Paper</span> </a> | <a href="https://discord.gg/A5Fjf5zC69"> <img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/> <span class="link-text">Discord</span> </a> </p> ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/KGmRE5w2sepNtwsEu8t7K.jpeg) Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks. 🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency. 🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks. 🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundations of the **Meta-Llama-3-8B** and [Meta-Llama-3-8B](meta-llama/Meta-Llama-3-8B) models. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. 
Key components of the training pipeline include: <div align="center"> <img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png"> </div> - **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290) - **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) - **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated) This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications. ⚙️ **Release Details**: - **Model Size**: 8 billion parameters - **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-8B-GGUF) - **Language(s) (NLP):** en - **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs - **License:** Meta-Llama License - **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) - **Resources for more information:** - Paper: Coming soon The model can be fine-tuned for more specialized tasks and datasets as needed. OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences. We are excited to share OpenBioLLM-8B with researchers and developers around the world. ### Use with transformers **Important: Please use the exact chat template provided by the Llama-3 instruct version. Otherwise there will be a degradation in performance. The model output can be verbose in rare cases. Please consider setting temperature = 0 to make this happen less.** See the snippet below for usage with Transformers: ```python import transformers import torch model_id = "aaditya/OpenBioLLM-Llama3-8B" pipeline = transformers.pipeline( "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto", ) messages = [ {"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. 
Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."}, {"role": "user", "content": "How can I split a 3mg or 4mg warfarin pill so I can get a 2.5mg pill?"}, ] prompt = pipeline.tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True ) terminators = [ pipeline.tokenizer.eos_token_id, pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>") ] outputs = pipeline( prompt, max_new_tokens=256, eos_token_id=terminators, do_sample=False, ) print(outputs[0]["generated_text"][len(prompt):]) ``` ## **Training procedure** ### **Training hyperparameters** <details> <summary>Click to see details</summary> - learning_rate: 0.0002 - lr_scheduler: cosine - train_batch_size: 12 - eval_batch_size: 8 - GPU: H100 80GB SXM5 - num_devices: 1 - optimizer: adamw_bnb_8bit - lr_scheduler_warmup_steps: 100 - num_epochs: 4 </details> ### **Peft hyperparameters** <details> <summary>Click to see details</summary> - adapter: qlora - lora_r: 128 - lora_alpha: 256 - lora_dropout: 0.05 - lora_target_linear: true - lora_target_modules: - q_proj - v_proj - k_proj - o_proj - gate_proj - down_proj - up_proj </details> ### **Training results** ### **Framework versions** - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.1 - Axolotl - LM Evaluation Harness for evaluation # Benchmark Results 🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge. 🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy. 
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg | |--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------| | **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** | | Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 | | **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 | | Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 | | **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 | | Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 | | GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 | | Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 | | gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 | | Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 | | Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 | | MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 | | BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 | | AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 | | ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 | <div align="center"> <img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png"> </div> ## Detailed Medical Subjectwise accuracy ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/UXF-V0col0Z0sS6BGPBkE.png) # Use Cases & Examples 🚨 **Below results are from the quantized version of OpenBioLLM-70B** # Summarize Clinical Notes OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/xdwdBgOxNi_TfML0hKlI8.png) # Answer Medical Questions OpenBioLLM-70B can provide answers to a wide range of medical questions. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/zO95GlwOQEZqCKQF69mE6.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/OKBczKw7gWeW5xsuDpc27.png) <details> <summary>Click to see details</summary> ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/eJGHT5khppYvJb8fQ-YW4.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Cnbwrqa_-ORHRuNRC2P6Y.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/J9DhdcvukAc9mnnW9fj2C.png) </details> # Clinical Entity Recognition OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. 
By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_69BW4k9LVABFwtxixL45.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/DKy5wYCoPhoPPUc1-x8_J.png) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/7WD9zCCBZT4-4XlfnIQjl.png) # Biomarkers Extraction ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/ZttoM4AiteT7gFYVhjIpN.png) # Classification OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/Bf5MW1d75qT-1F_TR_hC0.png) # De-Identification OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/hKX4kzm--Tw5bj6K78msy.png) **Advisory Notice!**  While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments. Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations. OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor. Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including: - Thorough testing and evaluation in relevant clinical scenarios - Alignment with evidence-based guidelines and best practices - Mitigation of potential biases and failure modes - Integration with human oversight and interpretation - Compliance with regulatory and ethical standards Always consult a qualified healthcare provider for personal medical needs. # Citation If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows: ``` @misc{OpenBioLLMs, author = {Ankit Pal, Malaikannan Sankarasubbu}, title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences}, year = {2024}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}} } ``` The accompanying paper is currently in progress and will be released soon. 
<div align="center"> <h2> 💌 Contact </h2> </div> We look forward to hearing from you and collaborating on this exciting project! **Contributors:** - [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com] - Saama AI Labs - Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset. # References We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models! Result sources - [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375) - [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138) - [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617) - [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023)
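For readers who want to reproduce the adapter setup with the `peft` library directly (the card reports training through Axolotl), the QLoRA values listed under "Peft hyperparameters" above map roughly to the following `LoraConfig`; this is an illustrative reconstruction rather than the exact training configuration, and the task type is assumed:

```python
from peft import LoraConfig

# Values copied from the card's "Peft hyperparameters" section; task_type is an assumption.
lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```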
{"language": ["en"], "license": "llama3", "tags": ["llama-3", "llama", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation"], "base_model": "meta-llama/Meta-Llama-3-8B", "widget": [{"example_title": "OpenBioLLM-8B", "messages": [{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience."}, {"role": "user", "content": "How long does it take for newborn jaundice to go away?"}], "output": {"text": "Newborn jaundice, also known as neonatal jaundice, is a common condition in newborns where the yellowing of the skin and eyes occurs due to an elevated level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when red blood cells break down. In most cases, newborn jaundice resolves on its own without any specific treatment.\nThe duration of newborn jaundice can vary depending on several factors such as the underlying cause, gestational age at birth, and individual variations in bilirubin metabolism. Here are some general guidelines:\n1. Physiological jaundice: This is the most common type of newborn jaundice and usually appears within 24-72 hours after birth. It tends to peak between the second and fifth day of life and gradually improves over the next week or two. By the time the baby is one week old, the jaundice should have mostly resolved. 2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and may appear later than physiological jaundice, typically between the fifth and fourteenth day of life. It tends to persist for a longer duration but usually resolves within six weeks after birth. 3. Pathological jaundice: This type of jaundice is less common and occurs due to an underlying medical condition that affects bilirubin metabolism or liver function. The duration of pathological jaundice depends on the specific cause and may require treatment.\nIt's important for parents to monitor their newborn's jaundice closely and seek medical advice if the jaundice progresses rapidly, becomes severe, or is accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness. In these cases, further evaluation and management may be necessary. Remember that each baby is unique, and the timing of jaundice resolution can vary. If you have concerns about your newborn's jaundice, it's always best to consult with a healthcare professional for personalized advice and guidance."}}], "model-index": [{"name": "OpenBioLLM-8B", "results": []}]}
matteocap/OpenBioLLM-Llama3-8B_safetensors
null
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-3", "Mixtral", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "distillation", "en", "arxiv:2305.18290", "arxiv:2303.13375", "arxiv:2212.13138", "arxiv:2305.09617", "arxiv:2402.07023", "base_model:meta-llama/Meta-Llama-3-8B", "license:llama3", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:32:43+00:00
null
null
{}
deathese/dazai_osamu_dub
null
[ "region:us" ]
null
2024-04-26T16:33:12+00:00
text-generation
transformers
{}
skuma307/Llama-3-8B-AWQ-4bit
null
[ "transformers", "safetensors", "llama", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T16:34:09+00:00
text-generation
transformers
This model is randomly initialized, using the config from [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) but with smaller size. Note the model is in float16. Codes: ```python import transformers import torch import os from huggingface_hub import create_repo, upload_folder source_model_id = 'microsoft/Phi-3-mini-128k-instruct' save_path = '/tmp/yujiepan/phi-3-tiny-random' repo_id = 'yujiepan/phi-3-tiny-random' config = transformers.AutoConfig.from_pretrained( source_model_id, trust_remote_code=True) config.hidden_size = 16 config.intermediate_size = 32 config.num_attention_heads = 4 config.num_hidden_layers = 2 config.num_key_value_heads = 4 config.rope_scaling['long_factor'] = [1.0299, 1.0499] config.rope_scaling['short_factor'] = [1.05, 1.05] model = transformers.AutoModelForCausalLM.from_config( config, trust_remote_code=True) model = model.to(torch.float16) model.save_pretrained(save_path) tokenizer = transformers.AutoTokenizer.from_pretrained( source_model_id, trust_remote_code=True) tokenizer.save_pretrained(save_path) result = transformers.pipelines.pipeline( 'text-generation', model=model.float(), tokenizer=tokenizer)('Hello') print(result) os.system(f'ls -alh {save_path}') create_repo(repo_id, exist_ok=True) upload_folder(repo_id=repo_id, folder_path=save_path) ```
{"library_name": "transformers", "pipeline_tag": "text-generation", "inference": true, "widget": [{"text": "Hello!", "example_title": "Hello world", "group": "Python"}]}
yujiepan/phi-3-tiny-random
null
[ "transformers", "safetensors", "phi3", "text-generation", "conversational", "custom_code", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:34:32+00:00
text-generation
transformers
{}
sanchit-gandhi/distil-zephyr-1.5b-dpo-ultrafeedback-200k
null
[ "transformers", "tensorboard", "mistral", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:34:39+00:00
text-generation
transformers
<!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer"> <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </a> </div> <!-- header end --> [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI) [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI) [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following) [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck) # Simply make AI models cheaper, smaller, faster, and greener! - Give a thumbs up if you like this model! - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/) - Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help. ## Results ![image info](./plots.png) **Frequently Asked Questions** - ***How does the compression work?*** The model is compressed with awq. - ***How does the model quality change?*** The quality of the model output might vary compared to the base model. - ***How is the model efficiency evaluated?*** These results were obtained on NVIDIA A100-PCIE-40GB with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you. - ***What is the model format?*** We use safetensors. - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data. - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model. - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads. - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases. ## Setup You can run the smashed model with these steps: 0. 
Check that the requirements of the original repo meta-llama/Meta-Llama-3-8B-Instruct are installed. In particular, check the python, cuda, and transformers versions. 1. Make sure that you have installed quantization related packages. ```bash pip install autoawq ``` 2. Load & run the model. ```python from transformers import AutoTokenizer from awq import AutoAWQForCausalLM model = AutoAWQForCausalLM.from_quantized("PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-AWQ-4bit-smashed", trust_remote_code=True, device_map='auto') tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct") input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"] outputs = model.generate(input_ids, max_new_tokens=216) print(tokenizer.decode(outputs[0])) ``` ## Configurations The configuration info is in `smash_config.json`. ## Credits & License The license of the smashed model follows the license of the original model. Please check the license of the original model meta-llama/Meta-Llama-3-8B-Instruct, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI. ## Want to compress other models? - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact). - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg", "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
PrunaAI/meta-llama-Meta-Llama-3-8B-Instruct-AWQ-4bit-smashed
null
[ "transformers", "safetensors", "llama", "text-generation", "pruna-ai", "conversational", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "4-bit", "region:us" ]
null
2024-04-26T16:34:52+00:00
null
transformers
{}
mPLUG/TinyChart-3B-768-siglip
null
[ "transformers", "pytorch", "siglip_vision_model", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:38:21+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # group3_non_all_zero This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0497 - Precision: 0.0638 - Recall: 0.2421 - F1: 0.1009 - Accuracy: 0.9339 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 55 | 1.1877 | 0.0140 | 0.25 | 0.0265 | 0.7339 | | No log | 2.0 | 110 | 0.9789 | 0.0219 | 0.2041 | 0.0395 | 0.8081 | | No log | 3.0 | 165 | 1.0274 | 0.0385 | 0.2437 | 0.0665 | 0.8703 | | No log | 4.0 | 220 | 1.1138 | 0.0225 | 0.1820 | 0.0401 | 0.8343 | | No log | 5.0 | 275 | 1.1690 | 0.0335 | 0.2184 | 0.0581 | 0.8702 | | No log | 6.0 | 330 | 1.3425 | 0.0421 | 0.2310 | 0.0712 | 0.8972 | | No log | 7.0 | 385 | 1.5089 | 0.0445 | 0.2342 | 0.0748 | 0.9079 | | No log | 8.0 | 440 | 1.5614 | 0.0466 | 0.2453 | 0.0783 | 0.9119 | | No log | 9.0 | 495 | 1.7200 | 0.0534 | 0.2453 | 0.0876 | 0.9220 | | 0.5787 | 10.0 | 550 | 1.7086 | 0.0447 | 0.2453 | 0.0756 | 0.9098 | | 0.5787 | 11.0 | 605 | 1.8784 | 0.0553 | 0.2342 | 0.0895 | 0.9263 | | 0.5787 | 12.0 | 660 | 1.9659 | 0.0589 | 0.2421 | 0.0947 | 0.9299 | | 0.5787 | 13.0 | 715 | 1.9472 | 0.0600 | 0.2437 | 0.0963 | 0.9297 | | 0.5787 | 14.0 | 770 | 2.0058 | 0.0605 | 0.2373 | 0.0964 | 0.9310 | | 0.5787 | 15.0 | 825 | 2.0497 | 0.0638 | 0.2421 | 0.1009 | 0.9339 | ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.2+cu121 - Datasets 2.19.0 - Tokenizers 0.13.3
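A minimal inference sketch for the card above, assuming the checkpoint in this record (anismahmahi/group3_non_all_zero) is used as-is; the predicted label names depend on the label map chosen during fine-tuning, which the card does not document:

```python
from transformers import pipeline

# Load the fine-tuned DeBERTa-v3 token classifier from the Hub (repo id from this record).
tagger = pipeline(
    "token-classification",
    model="anismahmahi/group3_non_all_zero",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level spans
)

for entity in tagger("The committee claimed the report was entirely fabricated."):
    print(f"{entity['word']}\t{entity['entity_group']}\t{entity['score']:.3f}")
```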
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "group3_non_all_zero", "results": []}]}
anismahmahi/group3_non_all_zero
null
[ "transformers", "pytorch", "tensorboard", "deberta-v2", "token-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:39:50+00:00
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
martinsinnona/visdecode_2024_6
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-26T16:40:44+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-4 This model is a fine-tuned version of [EleutherAI/pythia-1b](https://huggingface.co/EleutherAI/pythia-1b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 64 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.1 - Datasets 2.18.0 - Tokenizers 0.15.2
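A minimal usage sketch, assuming the checkpoint in this record (AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-4) exposes the sequence-classification head from fine-tuning; the card does not name the labels, so the output labels follow whatever label map was used during training:

```python
from transformers import pipeline

# Load the fine-tuned Pythia-1b classifier from the Hub (repo id from this record).
classifier = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-4",
)

print(classifier("A surprisingly moving film with a terrific lead performance."))
```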
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-1b", "model-index": [{"name": "robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-4", "results": []}]}
AlignmentResearch/robust_llm_pythia-1b_mz-130_IMDB_n-its-10-seed-4
null
[ "transformers", "tensorboard", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-1b", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:42:33+00:00
null
fastai
# Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
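As a concrete starting point for the next steps above, a minimal sketch of pulling this fastai model back from the Hub with `huggingface_hub` might look like the following. The repository id comes from this record; the input path is a placeholder, since the card does not say what task the Learner was trained for.

```python
from huggingface_hub import from_pretrained_fastai

# Hedged sketch: download the exported fastai Learner from the Hub and predict.
# "path/to/example_input" is a placeholder; the expected input depends on the task
# the Learner was trained for (e.g. an image path for a vision model).
learner = from_pretrained_fastai("joberiai/onepiece")
prediction = learner.predict("path/to/example_input")
print(prediction)
```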
{"tags": ["fastai"]}
joberiai/onepiece
null
[ "fastai", "region:us", "has_space" ]
null
2024-04-26T16:43:13+00:00
text-generation
transformers
# Aloe: A New Family of Healthcare LLMs

Aloe is a new family of healthcare LLMs that is highly competitive with all previous open models of its range and reaches state-of-the-art results at its size by using model merging and advanced prompting strategies. Aloe scores high in metrics measuring ethics and factuality, thanks to a combined red teaming and alignment effort. Complete training details, model merging configurations, and all training data (including synthetically generated data) will be shared. Additionally, the prompting repository used in this work to produce state-of-the-art results during inference will also be shared. Aloe comes with a healthcare-specific risk assessment to contribute to the safe use and deployment of such systems.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/HMD6WEoqqrAV8Ng_fAcnN.png" width="95%">

## Model Details

### Model Description

- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (mainly)
- **License:** This model is based on Meta Llama 3 8B and is governed by the [Meta Llama 3 License](https://llama.meta.com/llama3/license/). All our modifications are available with a [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
- **Finetuned from model:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)

### Model Sources [optional]

- **Repository:** Coming soon
- **Paper:** Coming soon

## Model Performance

Aloe has been tested on the most popular healthcare QA datasets, with and without the medprompting inference technique. Results show competitive performance, even against bigger models.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62f7a16192950415b637e201/rQ4z-qXzKN44oAcFDbHi2.png" width="95%">

## Uses

### Direct Use

We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare.

### Out-of-Scope Use

These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is prohibited.

## Bias, Risks, and Limitations

First, let us consider healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.

The second risk we consider is medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers.
Public literacy on the dangers of self-diagnosis is one of the main defences, together with the introduction of disclaimers and warnings on the models' outputs.

The last risk we consider is access to information on dangerous substances or procedures. While the literature on sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">

### Recommendations

We avoid the use of all personal data in our training. Model safety cannot be guaranteed. Aloe can produce toxic content under the appropriate prompts. For these reasons, minors should not be left alone to interact with Aloe without supervision.

## How to Get Started with the Model

Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples of both.

#### Transformers pipeline

```python
import transformers
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello."},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

#### Transformers AutoModelForCausalLM

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "HPAI-BSC/Llama3-Aloe-8B-Alpha"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```

## Training Details

Supervised fine-tuning on top of Llama 3 8B using medical and general domain datasets, model merging using the DARE-TIES process, and a two-stage DPO process for human preference alignment. More details coming soon.
### Training Data

- Medical domain datasets, including synthetic data generated using Mixtral-8x7B and Genstruct
- LDJnr/Capybara
- hkust-nlp/deita-10k-v0
- jondurbin/airoboros-3.2
- argilla/dpo-mix-7k
- nvidia/HelpSteer
- Custom preference data with adversarial prompts generated from Anthropic Harmless, Chen et al., and original prompts

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)

#### Metrics

- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.

### Results

<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/STlPSggXr9P9JeWAvmAsi.png" width="90%">

#### Summary

To compare Aloe with the most competitive open models (both general purpose and healthcare-specific), we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA and MMLU for six medical tasks only), together with the new and highly reliable CareQA. We produce the standard MultiMedQA score for reference, by computing the weighted average accuracy on all scores except CareQA. Additionally, we calculate the arithmetic mean across all datasets. The Medical MMLU is calculated by averaging the six medical subtasks: Anatomy, Clinical knowledge, College Biology, College medicine, Medical genetics, and Professional medicine.

Benchmark results indicate the training conducted on Aloe has boosted its performance above Llama3-8B-Instruct. Llama3-Aloe-8B-Alpha outperforms larger models like Meditron 70B and is close to larger base models like Yi-34. For the former, this gain is consistent even when using SC-CoT, using their best-reported variant. All these results make Llama3-Aloe-8B-Alpha the best healthcare LLM of its size.

With the help of prompting techniques, the performance of Llama3-Aloe-8B-Alpha is significantly improved. Medprompting in particular provides a 7% increase in reported accuracy, after which Llama3-Aloe-8B-Alpha only lags behind the ten-times-bigger Llama-3-70B-Instruct. This improvement is mostly consistent across medical fields. Llama3-Aloe-8B-Alpha with medprompting beats the performance of Meditron 70B with its self-reported 20-shot SC-CoT on MMLU-Medical and is slightly worse in the other benchmarks.

## Environmental Impact

- **Hardware Type:** 4xH100
- **Hours used:** 7,000
- **Hardware Provider:** Barcelona Supercomputing Center
- **Compute Region:** Spain
- **Carbon Emitted:** 439.25 kg

## Model Card Contact

mailto:[email protected]
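Accuracy on multiple-choice QA is the only metric reported in the evaluation section above. A minimal, illustrative sketch of that computation follows; it is not the authors' evaluation harness, and the option letters are made up.

```python
def multiple_choice_accuracy(predictions, references):
    """Fraction of questions whose predicted option matches the gold option."""
    assert len(predictions) == len(references)
    correct = sum(pred == gold for pred, gold in zip(predictions, references))
    return correct / len(references)

# Hypothetical example: gold option letters vs. model-predicted letters.
gold_options = ["A", "C", "B", "D"]
predicted_options = ["A", "C", "D", "D"]
print(multiple_choice_accuracy(predicted_options, gold_options))  # 0.75
```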
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "tags": ["biology", "medical"], "datasets": ["argilla/dpo-mix-7k", "nvidia/HelpSteer", "jondurbin/airoboros-3.2", "hkust-nlp/deita-10k-v0", "LDJnr/Capybara", "HPAI-BSC/CareQA", "GBaker/MedQA-USMLE-4-options", "lukaemon/mmlu", "bigbio/pubmed_qa", "openlifescienceai/medmcqa", "bigbio/med_qa", "HPAI-BSC/better-safe-than-sorry"]}
HPAI-BSC/Llama3-Aloe-8B-Alpha
null
[ "transformers", "pytorch", "llama", "text-generation", "biology", "medical", "conversational", "en", "dataset:argilla/dpo-mix-7k", "dataset:nvidia/HelpSteer", "dataset:jondurbin/airoboros-3.2", "dataset:hkust-nlp/deita-10k-v0", "dataset:LDJnr/Capybara", "dataset:HPAI-BSC/CareQA", "dataset:GBaker/MedQA-USMLE-4-options", "dataset:lukaemon/mmlu", "dataset:bigbio/pubmed_qa", "dataset:openlifescienceai/medmcqa", "dataset:bigbio/med_qa", "dataset:HPAI-BSC/better-safe-than-sorry", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:43:36+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Meta-Llama-3-8B-Instruct_fictional_Japanese_v1 This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 36 ### Training results ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2 - Datasets 2.18.0 - Tokenizers 0.15.2
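For readers who want to reproduce a run of this shape, a hedged TRL sketch follows. Only the hyperparameters are taken from the card; the data file, text column, and output directory are placeholders, and the `dataset_text_field` argument assumes a TRL release contemporary with the Transformers 4.39 version listed above.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder data file: the card only says the model was trained on "the generator dataset".
train_dataset = load_dataset("json", data_files="fictional_japanese.jsonl", split="train")

args = TrainingArguments(
    output_dir="llama3-8b-instruct-fictional-japanese",  # placeholder
    learning_rate=5e-5,                # from the card
    per_device_train_batch_size=1,     # from the card
    gradient_accumulation_steps=16,    # from the card
    num_train_epochs=36,               # from the card
    lr_scheduler_type="linear",        # from the card
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",         # placeholder column name
)
trainer.train()
```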
{"license": "other", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct", "model-index": [{"name": "Meta-Llama-3-8B-Instruct_fictional_Japanese_v1", "results": []}]}
yzhuang/Meta-Llama-3-8B-Instruct_fictional_Japanese_v1
null
[ "transformers", "tensorboard", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "dataset:generator", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:43:38+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "openlm-research/open_llama_3b_v2"}
yiyic/llama3b-text-entprop-lora-clf-epoch-1
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:openlm-research/open_llama_3b_v2", "region:us" ]
null
2024-04-26T16:43:40+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
yiyic/llama-text-entprop-lora-clf-epoch-0
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-26T16:43:47+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
yiyic/llama-text-prop-lora-clf-epoch-0
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-26T16:43:55+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.7.2.dev0
{"library_name": "peft", "base_model": "meta-llama/Meta-Llama-3-8B"}
yiyic/llama-text-ent-lora-clf-epoch-0
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Meta-Llama-3-8B", "region:us" ]
null
2024-04-26T16:44:04+00:00
text-generation
transformers
# ErisMaidFlame-7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:

* [nbeerbower/MaidFlameSoup-7B](https://huggingface.co/nbeerbower/MaidFlameSoup-7B)
* [ChaoticNeutrals/Eris_Remix_DPO_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_DPO_7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Eris_Remix_DPO_7B
        layer_range: [0, 32]
      - model: nbeerbower/MaidFlameSoup-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/Eris_Remix_DPO_7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
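Once merged, the resulting checkpoint loads like any other Mistral-architecture model. A minimal, hedged usage sketch follows; the repository id is taken from this record and the prompt is a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: load the merged model and generate a short completion.
model_id = "n00854180t/ErisMaidFlame-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

inputs = tokenizer("Hello! Tell me about yourself.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```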
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["nbeerbower/MaidFlameSoup-7B", "ChaoticNeutrals/Eris_Remix_DPO_7B"]}
n00854180t/ErisMaidFlame-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "base_model:nbeerbower/MaidFlameSoup-7B", "base_model:ChaoticNeutrals/Eris_Remix_DPO_7B", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:44:06+00:00
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
vonewman/galsenai-meetup-adapter
null
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-26T16:44:44+00:00
text-generation
transformers
{}
north/mistral-7b-reference100k
null
[ "transformers", "safetensors", "mistral", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-26T16:44:45+00:00
null
null
{}
manishere/jindayis
null
[ "region:us" ]
null
2024-04-26T16:44:48+00:00
null
null
{}
ChristianAzinn/acge_text_embedding-gguf
null
[ "region:us" ]
null
2024-04-26T16:44:53+00:00