| Column | Type | Values / Lengths |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 to 900k |
| metadata | stringlengths | 2 to 438k |
| id | stringlengths | 5 to 122 |
| last_modified | null | |
| tags | sequencelengths | 1 to 1.84k |
| sha | null | |
| created_at | stringlengths | 25 to 25 |
| arxiv | sequencelengths | 0 to 201 |
| languages | sequencelengths | 0 to 1.83k |
| tags_str | stringlengths | 17 to 9.34k |
| text_str | stringlengths | 0 to 389k |
| text_lists | sequencelengths | 0 to 722 |
| processed_texts | sequencelengths | 1 to 723 |
reinforcement-learning
stable-baselines3
# **A2C** Agent playing **PandaReachDense-v3** This is a trained model of an **A2C** agent playing **PandaReachDense-v3** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
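Until the author fills in the snippet above, a minimal loading sketch with `huggingface_sb3` could look like the following; the checkpoint filename is an assumption based on the library's usual naming convention:

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Download the checkpoint from the Hub; the zip filename is an assumption
# based on the usual huggingface_sb3 naming convention.
checkpoint = load_from_hub(
    repo_id="conlan/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```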
{"library_name": "stable-baselines3", "tags": ["PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "A2C", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaReachDense-v3", "type": "PandaReachDense-v3"}, "metrics": [{"type": "mean_reward", "value": "-0.18 +/- 0.09", "name": "mean_reward", "verified": false}]}]}]}
conlan/a2c-PandaReachDense-v3
null
[ "stable-baselines3", "PandaReachDense-v3", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-21T19:49:00+00:00
[]
[]
TAGS #stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# A2C Agent playing PandaReachDense-v3 This is a trained model of an A2C agent playing PandaReachDense-v3 using the stable-baselines3 library. ## Usage (with Stable-baselines3) TODO: Add your code
[ "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
[ "TAGS\n#stable-baselines3 #PandaReachDense-v3 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# A2C Agent playing PandaReachDense-v3\nThis is a trained model of a A2C agent playing PandaReachDense-v3\nusing the stable-baselines3 library.", "## Usage (with Stable-baselines3)\nTODO: Add your code" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
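The "How to Get Started with the Model" section above is still a placeholder; a minimal text-generation sketch for this checkpoint might look like the following, where the prompt and generation settings are illustrative assumptions:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id is taken from this card; prompt and generation settings are assumptions.
model_id = "Aspik101/llamaft5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```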
{"library_name": "transformers", "tags": []}
Aspik101/llamaft5
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T19:49:37+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 <Gallery /> ## Model description These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of generating images for the [Critical Dream](https://github.com/cosmicBboy/critical-dream) project. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: True. Special VAE used for training: stabilityai/sdxl-vae. ## Trigger words You should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3/tree/main) them in the Files & versions tab. ## Tracker run link https://wandb.ai/nielsbantilan/dreambooth-lora-sd-xl/runs/2vk9278x ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
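The "How to use" section above is still a TODO; a plausible sketch for running this LoRA on top of the SDXL base model is shown below. A CUDA GPU and fp16 weights are assumptions, and the prompt is the trigger phrase from this card:

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch, assuming a CUDA GPU and fp16 weights; scheduler and
# inference settings are left at the pipeline defaults.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3")

prompt = (
    "a picture of [dm-matt-mercer], a dungeon master. background is a forest. "
    "fantasy art style, high quality, highly detailed, sharp focus"
)
image = pipe(prompt).images[0]
image.save("dm-matt-mercer.png")
```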
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "prompt": "a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\"", "widget": [{"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_0.png"}}, {"text": "a picture of [dm-matt-mercer]", "output": {"url": "image_1.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_2.png"}}, {"text": "a picture of a dungeon master.", "output": {"url": "image_3.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_4.png"}}, {"text": "a picture of [critrole-fjord], a male half-orc warlock. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_5.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_6.png"}}, {"text": "a picture of a male half-orc warlock", "output": {"url": "image_7.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_8.png"}}, {"text": "a picture of [critrole-beau], a female human monk. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_9.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_10.png"}}, {"text": "a picture of a female human monk", "output": {"url": "image_11.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_12.png"}}, {"text": "a picture of [critrole-caduceus], a male firbolg cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_13.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_14.png"}}, {"text": "a picture of a male firbolg cleric", "output": {"url": "image_15.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_16.png"}}, {"text": "a picture of [critrole-caleb], a male human wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_17.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_18.png"}}, {"text": "a picture of a male human wizard", "output": {"url": "image_19.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_20.png"}}, {"text": "a picture of [critrole-jester], a female tiefling cleric. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_21.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_22.png"}}, {"text": "a picture of a female tiefling cleric", "output": {"url": "image_23.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. 
fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_24.png"}}, {"text": "a picture of [critrole-nott], a female goblin rogue. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_25.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_26.png"}}, {"text": "a picture of a female goblin rogue", "output": {"url": "image_27.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_28.png"}}, {"text": "a picture of [critrole-veth], a female halfling rogue/wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_29.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_30.png"}}, {"text": "a picture of a female halfling rogue/wizard", "output": {"url": "image_31.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_32.png"}}, {"text": "a picture of [critrole-yasha], a female aasimar barbarian. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_33.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_34.png"}}, {"text": "a picture of a female aasimar barbarian", "output": {"url": "image_35.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_36.png"}}, {"text": "a picture of [critrole-mollymauk], a male tiefling blood hunter. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_37.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_38.png"}}, {"text": "a picture of a male tiefling blood hunter", "output": {"url": "image_39.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_40.png"}}, {"text": "a picture of [critrole-essek], a male drow wizard. background is a forest. fantasy art style, high quality, highly detailed, sharp focus", "output": {"url": "image_41.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_42.png"}}, {"text": "a picture of a male drow wizard", "output": {"url": "image_43.png"}}]}
cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3
null
[ "diffusers", "text-to-image", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-21T19:50:24+00:00
[]
[]
TAGS #diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 <Gallery /> ## Model description These are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of generating images for the Critical Dream project. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: True. Special VAE used for training: stabilityai/sdxl-vae. ## Trigger words You should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus" to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Tracker run link URL ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3\n\n<Gallery />", "## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.", "## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Tracker run link\n\nURL", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #stable-diffusion-xl #stable-diffusion-xl-diffusers #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# Critical Dream - cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3\n\n<Gallery />", "## Model description\n\nThese are cosmicBboy/stable-diffusion-xl-base-1.0-lora-dreambooth-critdream-v0.6.3 LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0, for the purposes of\ngenerating images for the Critical Dream\nproject.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: True.\n\nSpecial VAE used for training: stabilityai/sdxl-vae.", "## Trigger words\n\nYou should use a picture of [dm-matt-mercer], a dungeon master. background is a forest. fantasy art style, high quality, highly detailed, sharp focus\" to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Tracker run link\n\nURL", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
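Since the quick-start section above is empty, a minimal sketch for loading this PEFT adapter on top of its base model could look like this; the device and generation settings are assumptions:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model id is taken from this card's metadata; device and generation settings are assumptions.
base_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "cgihlstorf/finetuned_Mistral7B_32_1_0.0003_sequential")

inputs = tokenizer("Once upon a time", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=30)[0], skip_special_tokens=True))
```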
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
cgihlstorf/finetuned_Mistral7B_32_1_0.0003_sequential
null
[ "peft", "arxiv:1910.09700", "base_model:mistralai/Mistral-7B-v0.1", "region:us" ]
null
2024-04-21T19:53:54+00:00
[ "1910.09700" ]
[]
TAGS #peft #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
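As with the other auto-generated cards, the quick-start code is missing; a short sketch using the `pipeline` API might look like this, where device placement and generation length are assumptions:

```python
from transformers import pipeline

# Repo id comes from this card; device placement and generation length are assumptions.
generator = pipeline("text-generation", model="redscroll/OrpoLlama-3-8B", device_map="auto")
print(generator("Briefly explain reinforcement learning.", max_new_tokens=64)[0]["generated_text"])
```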
{"library_name": "transformers", "tags": []}
redscroll/OrpoLlama-3-8B
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T19:54:27+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<img src="./llama-3-merges.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Llama-3-16B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [MaziyarPanahi/Llama-3-11B](https://huggingface.co/MaziyarPanahi/Llama-3-11B)
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "facebook", "meta", "pytorch", "llama", "llama-3"], "model_name": "Llama-3-16B", "base_model": "MaziyarPanahi/Llama-3-11B", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Llama-3-16B
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "facebook", "meta", "pytorch", "llama-3", "en", "base_model:MaziyarPanahi/Llama-3-11B", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T19:54:32+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #en #base_model-MaziyarPanahi/Llama-3-11B #license-other #autotrain_compatible #text-generation-inference #region-us
<img src="./URL" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Llama-3-16B This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * MaziyarPanahi/Llama-3-11B
[ "# Llama-3-16B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Llama-3-11B" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #en #base_model-MaziyarPanahi/Llama-3-11B #license-other #autotrain_compatible #text-generation-inference #region-us \n", "# Llama-3-16B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* MaziyarPanahi/Llama-3-11B" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bachh This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5198 - Accuracy: 0.8654 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.8279 | 0.11 | 100 | 0.9659 | 0.6697 | | 0.7801 | 0.22 | 200 | 0.8758 | 0.6725 | | 0.7136 | 0.34 | 300 | 0.7788 | 0.7178 | | 0.7302 | 0.45 | 400 | 0.7041 | 0.7438 | | 0.6793 | 0.56 | 500 | 0.6882 | 0.7431 | | 0.5872 | 0.67 | 600 | 0.6480 | 0.7683 | | 0.5711 | 0.78 | 700 | 0.6490 | 0.7588 | | 0.6003 | 0.9 | 800 | 0.6116 | 0.7727 | | 0.4382 | 1.01 | 900 | 0.6016 | 0.7883 | | 0.4666 | 1.12 | 1000 | 0.5848 | 0.7803 | | 0.3944 | 1.23 | 1100 | 0.6039 | 0.7745 | | 0.3805 | 1.35 | 1200 | 0.4929 | 0.8221 | | 0.3795 | 1.46 | 1300 | 0.5391 | 0.8048 | | 0.3727 | 1.57 | 1400 | 0.5303 | 0.8149 | | 0.3658 | 1.68 | 1500 | 0.5471 | 0.8160 | | 0.3289 | 1.79 | 1600 | 0.5170 | 0.8184 | | 0.2832 | 1.91 | 1700 | 0.4795 | 0.8334 | | 0.2048 | 2.02 | 1800 | 0.4942 | 0.8300 | | 0.2085 | 2.13 | 1900 | 0.4743 | 0.8394 | | 0.1449 | 2.24 | 2000 | 0.4642 | 0.8469 | | 0.1662 | 2.35 | 2100 | 0.4669 | 0.8426 | | 0.1603 | 2.47 | 2200 | 0.4721 | 0.8452 | | 0.1079 | 2.58 | 2300 | 0.5236 | 0.8342 | | 0.1952 | 2.69 | 2400 | 0.4448 | 0.8493 | | 0.2091 | 2.8 | 2500 | 0.4959 | 0.8518 | | 0.166 | 2.91 | 2600 | 0.5036 | 0.8356 | | 0.055 | 3.03 | 2700 | 0.4854 | 0.8497 | | 0.0639 | 3.14 | 2800 | 0.4987 | 0.8605 | | 0.0421 | 3.25 | 2900 | 0.5258 | 0.8543 | | 0.0292 | 3.36 | 3000 | 0.5287 | 0.8601 | | 0.0136 | 3.48 | 3100 | 0.5250 | 0.8593 | | 0.0421 | 3.59 | 3200 | 0.5192 | 0.8647 | | 0.0307 | 3.7 | 3300 | 0.5251 | 0.8569 | | 0.0536 | 3.81 | 3400 | 0.5300 | 0.8611 | | 0.0269 | 3.92 | 3500 | 0.5198 | 0.8654 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
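For readers who want to reproduce a comparable setup, the hyperparameters listed above map roughly onto a `TrainingArguments` configuration like the sketch below; the original training script is not part of this card, so `output_dir` and any unlisted settings are assumptions:

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed above; output_dir and any unlisted
# settings are assumptions (Adam betas/epsilon match the library defaults).
training_args = TrainingArguments(
    output_dir="bachh",
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
)
```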
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "bachh", "results": []}]}
karoladelk/bachh
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "generated_from_trainer", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:00:59+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
bachh ===== This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.5198 * Accuracy: 0.8654 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 32 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #generated_from_trainer #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/Sao10K/L3-Solana-8B-v1 <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.IQ3_XS.gguf) | IQ3_XS | 3.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.IQ3_M.gguf) | IQ3_M | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/L3-Solana-8B-v1-GGUF/resolve/main/L3-Solana-8B-v1.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
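As a concrete starting point, one of the files listed above can be downloaded and run locally with `llama-cpp-python`; the Q4_K_M quant is used here only as an example, and the context size is an assumption:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Q4_K_M is one of the "recommended" quants in the table above; n_ctx is an assumption.
gguf_path = hf_hub_download(
    repo_id="mradermacher/L3-Solana-8B-v1-GGUF",
    filename="L3-Solana-8B-v1.Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Q: What is a GGUF file? A:", max_tokens=64)
print(out["choices"][0]["text"])
```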
{"language": ["en"], "license": "cc-by-nc-4.0", "library_name": "transformers", "base_model": "Sao10K/L3-Solana-8B-v1", "quantized_by": "mradermacher"}
mradermacher/L3-Solana-8B-v1-GGUF
null
[ "transformers", "gguf", "en", "base_model:Sao10K/L3-Solana-8B-v1", "license:cc-by-nc-4.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:02:09+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-Sao10K/L3-Solana-8B-v1 #license-cc-by-nc-4.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-Sao10K/L3-Solana-8B-v1 #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
baris-yazici/binary_liar_15epoch_stabilityai_stablelm-2-zephyr-1_6b_PROMPT_TUNING_CAUSAL_LM
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:02:48+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
This is a merge of the following models: * [Locutusque/llama-3-neural-chat-v1-8b](https://huggingface.co/Locutusque/llama-3-neural-chat-v1-8b) * [Locutusque/Llama-3-Orca-1.0-8B](https://huggingface.co/Locutusque/Llama-3-Orca-1.0-8B) * [DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental](https://huggingface.co/DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental) This was mostly a proof-of-concept test. GGUF 4k quants here: [cstr/llama3-discolm-orca-GGUF](https://huggingface.co/cstr/llama3-discolm-orca-GGUF)
{"language": ["de", "en"], "license": "other"}
cstr/llama3-discolm-orca-GGUF
null
[ "gguf", "de", "en", "license:other", "region:us" ]
null
2024-04-21T20:03:14+00:00
[]
[ "de", "en" ]
TAGS #gguf #de #en #license-other #region-us
This is a merge of the following models: * Locutusque/llama-3-neural-chat-v1-8b * Locutusque/Llama-3-Orca-1.0-8B * DiscoResearch/Llama3_DiscoLM_German_8b_v0.1_experimental This was mostly a proof-of-concept test. GGUF 4k quants here: cstr/llama3-discolm-orca-GGUF
[]
[ "TAGS\n#gguf #de #en #license-other #region-us \n" ]
null
transformers
# NikolayKozloff/Azzurro-Q6_K-GGUF This model was converted to GGUF format from [`MoxoffSpA/Azzurro`](https://huggingface.co/MoxoffSpA/Azzurro) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MoxoffSpA/Azzurro) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo NikolayKozloff/Azzurro-Q6_K-GGUF --model azzurro.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo NikolayKozloff/Azzurro-Q6_K-GGUF --model azzurro.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m azzurro.Q6_K.gguf -n 128 ```
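As a companion to the `llama-server` invocation in the card, the sketch below queries a locally running server over HTTP. It assumes a recent llama.cpp build that serves an OpenAI-compatible `/v1/chat/completions` route on its default port 8080; neither the port nor the route is stated in the card itself.

```python
# Hedged sketch: talk to the llama-server started as shown in the card.
# Assumes the server's default host/port (localhost:8080) and its
# OpenAI-compatible chat endpoint; adjust if your build differs.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Ciao! Presentati in una frase."}],
        "max_tokens": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```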
{"language": ["it", "en"], "license": "mit", "library_name": "transformers", "tags": ["sft", "it", "mistral", "chatml", "llama-cpp", "gguf-my-repo"]}
NikolayKozloff/Azzurro-Q6_K-GGUF
null
[ "transformers", "gguf", "sft", "it", "mistral", "chatml", "llama-cpp", "gguf-my-repo", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:03:59+00:00
[]
[ "it", "en" ]
TAGS #transformers #gguf #sft #it #mistral #chatml #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us
# NikolayKozloff/Azzurro-Q6_K-GGUF This model was converted to GGUF format from 'MoxoffSpA/Azzurro' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# NikolayKozloff/Azzurro-Q6_K-GGUF\nThis model was converted to GGUF format from 'MoxoffSpA/Azzurro' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #sft #it #mistral #chatml #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us \n", "# NikolayKozloff/Azzurro-Q6_K-GGUF\nThis model was converted to GGUF format from 'MoxoffSpA/Azzurro' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_gen_n200 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
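The card lists training hyperparameters but no usage snippet. Since the repo is a PEFT artifact trained on top of `epfl-llm/meditron-7b`, one plausible way to use it is to attach the adapter to the base model with `peft`; the sketch below works under that assumption (the adapter type and any quantization settings are not stated in the card).

```python
# Minimal sketch, not from the card: load the base model named in the card and
# attach this repo as a PEFT adapter on top of it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "epfl-llm/meditron-7b"           # base model named in the card
adapter_id = "th135/meditron-7b_gen_n200"  # this repo (assumed to hold adapter weights)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # merges-in nothing; runs adapter at inference

inputs = tokenizer("List two common causes of anemia.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The same pattern applies to the other th135 adapter repos in this section; only `adapter_id` changes (and the base model, for the medalpaca variants).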
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_gen_n200", "results": []}]}
th135/meditron-7b_gen_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:04:42+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_gen_n200 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_gen_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_gen_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_med_n200 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_med_n200", "results": []}]}
th135/meditron-7b_med_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:05:06+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_med_n200 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_med_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_med_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n200 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
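For readers who want to reproduce the listed setup, here is a hedged `TrainingArguments` sketch that mirrors the hyperparameters in the card above (learning rate 3e-4, per-device batch 4 with 32 accumulation steps for an effective 128, linear schedule with 10 warmup steps, 3 epochs, native AMP). The output directory name and the choice of fp16 over bf16 are assumptions.

```python
# Sketch: a transformers TrainingArguments block matching the hyperparameters
# listed in the card above. Dataset, model wiring, and Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="medalpaca-7b_both_n200",   # assumed name, not from the card
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,        # 4 x 32 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    seed=42,
    fp16=True,                             # "mixed_precision_training: Native AMP"
)
```

The Adam betas and epsilon listed in the card are the transformers defaults, so they need no explicit arguments.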
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n200", "results": []}]}
th135/medalpaca-7b_both_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:05:06+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n200 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n200 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n200", "results": []}]}
th135/meditron-7b_both_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:05:19+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n200 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_gen_n400 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_gen_n400", "results": []}]}
th135/meditron-7b_gen_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:05:20+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_gen_n400 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_gen_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_gen_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_gen_n600 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_gen_n600", "results": []}]}
th135/meditron-7b_gen_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:05:39+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_gen_n600 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_gen_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_gen_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_gen_n200 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_gen_n200", "results": []}]}
th135/medalpaca-7b_gen_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:05:39+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_gen_n200 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_gen_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_gen_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n400 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n400", "results": []}]}
th135/meditron-7b_both_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:05:42+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n400 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_gen_n400 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_gen_n400", "results": []}]}
th135/medalpaca-7b_gen_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:05:46+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_gen_n400 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_gen_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_gen_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n600 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n600", "results": []}]}
th135/meditron-7b_both_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:00+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n600 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_med_n400

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_med_n400", "results": []}]}
th135/meditron-7b_med_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:02+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_med_n400 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_med_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_med_n400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_med_n400

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_med_n400", "results": []}]}
th135/medalpaca-7b_med_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:04+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_med_n400 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_med_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_med_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_gen_n600

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_gen_n600", "results": []}]}
th135/medalpaca-7b_gen_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:15+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_gen_n600 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_gen_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_gen_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_med_n600

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_med_n600", "results": []}]}
th135/medalpaca-7b_med_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:28+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_med_n600 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_med_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_med_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_both_n800

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n800", "results": []}]}
th135/meditron-7b_both_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:29+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n800 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_med_n800

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_med_n800", "results": []}]}
th135/medalpaca-7b_med_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:29+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_med_n800 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_med_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_med_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_med_n900

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_med_n900", "results": []}]}
th135/meditron-7b_med_n900
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:37+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_med_n900 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_med_n900\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_med_n900\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_gen_n800

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_gen_n800", "results": []}]}
th135/meditron-7b_gen_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:44+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_gen_n800 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_gen_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_gen_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_gen_n800

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_gen_n800", "results": []}]}
th135/medalpaca-7b_gen_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:47+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_gen_n800 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_gen_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_gen_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_both_n1000

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n1000", "results": []}]}
th135/meditron-7b_both_n1000
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:47+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n1000 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n1000\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n1000\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_both_n400

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n400", "results": []}]}
th135/medalpaca-7b_both_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:48+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n400 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# meditron-7b_med_n600

This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_med_n600", "results": []}]}
th135/meditron-7b_med_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:06:49+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_med_n600 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_med_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_med_n600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# medalpaca-7b_both_n800

This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n800", "results": []}]}
th135/medalpaca-7b_both_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:06:49+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n800 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n200 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_gen_n200", "results": []}]}
th135/medalpaca-13b_gen_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:06:56+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_gen_n200 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_gen_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_gen_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_gen_n900 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
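The hyperparameter list in the card above maps almost one-to-one onto Hugging Face `TrainingArguments`. The sketch below is a hypothetical reconstruction, not the author's training script; the output directory and the optimizer name are assumptions (the card only states Adam with betas=(0.9,0.999) and epsilon=1e-08).

```python
# Hypothetical reconstruction of the card's hyperparameters as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="meditron-7b_gen_n900",   # illustrative output path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=32,      # 4 * 32 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=3,
    seed=42,
    fp16=True,                           # "Native AMP" mixed precision
    optim="adamw_torch",                 # assumed; defaults match the stated betas/epsilon
)
```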
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_gen_n900", "results": []}]}
th135/meditron-7b_gen_n900
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:07:02+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_gen_n900 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_gen_n900\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_gen_n900\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
transformers
# Uploaded model - **Developed by:** L33tcode - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
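The card states that training used Unsloth with TRL but includes no code. A minimal sketch of loading the named 4-bit base with Unsloth follows; the sequence length and the inference-mode call are assumptions, and the actual fine-tuning setup (LoRA config, SFTTrainer) is not reproduced here.

```python
# Hedged sketch: load the 4-bit base model named in the card with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # base model named in the card
    max_seq_length=2048,                       # assumed context length
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)         # switch to faster inference mode
```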
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
L33tcode/Llama-3-mental-health-v1
null
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:07:03+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: L33tcode - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: L33tcode\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: L33tcode\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n600 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n600", "results": []}]}
th135/medalpaca-7b_both_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:07:07+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n600 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
transformers
# NikolayKozloff/Volare-Q6_K-GGUF This model was converted to GGUF format from [`MoxoffSpA/Volare`](https://huggingface.co/MoxoffSpA/Volare) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MoxoffSpA/Volare) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo NikolayKozloff/Volare-Q6_K-GGUF --model volare.Q6_K.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo NikolayKozloff/Volare-Q6_K-GGUF --model volare.Q6_K.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m volare.Q6_K.gguf -n 128 ```
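Besides the llama.cpp CLI and server shown above, the same quantized file can be used from Python through `llama-cpp-python`. The sketch below assumes `volare.Q6_K.gguf` has already been downloaded from this repository; the context size and token budget are illustrative.

```python
# Hedged sketch: run the downloaded GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="volare.Q6_K.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```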
{"language": ["it", "en"], "license": "mit", "library_name": "transformers", "tags": ["sft", "it", "gemma", "chatml", "llama-cpp", "gguf-my-repo"]}
NikolayKozloff/Volare-Q6_K-GGUF
null
[ "transformers", "gguf", "sft", "it", "gemma", "chatml", "llama-cpp", "gguf-my-repo", "en", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:07:15+00:00
[]
[ "it", "en" ]
TAGS #transformers #gguf #sft #it #gemma #chatml #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us
# NikolayKozloff/Volare-Q6_K-GGUF This model was converted to GGUF format from 'MoxoffSpA/Volare' using URL via the URL's GGUF-my-repo space. Refer to the original model card for more details on the model. ## Use with URL Install URL through brew. Invoke the URL server or the CLI. CLI: Server: Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
[ "# NikolayKozloff/Volare-Q6_K-GGUF\nThis model was converted to GGUF format from 'MoxoffSpA/Volare' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
[ "TAGS\n#transformers #gguf #sft #it #gemma #chatml #llama-cpp #gguf-my-repo #en #license-mit #endpoints_compatible #region-us \n", "# NikolayKozloff/Volare-Q6_K-GGUF\nThis model was converted to GGUF format from 'MoxoffSpA/Volare' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.", "## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well." ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_med_n900 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_med_n900", "results": []}]}
th135/medalpaca-7b_med_n900
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:07:18+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_med_n900 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_med_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_med_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_med_n800 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_med_n800", "results": []}]}
th135/meditron-7b_med_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:07:23+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_med_n800 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_med_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_med_n800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n1000 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n1000", "results": []}]}
th135/medalpaca-7b_both_n1000
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:07:26+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n1000 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n1000\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n1000\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_med_n200 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_med_n200", "results": []}]}
th135/medalpaca-13b_med_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:07:28+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_med_n200 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_med_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_med_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_gen_n900 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_gen_n900", "results": []}]}
th135/medalpaca-7b_gen_n900
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:07:35+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_gen_n900 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_gen_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_gen_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n1200 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
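For adapters like the one above, a common deployment step (not described in the card) is merging the LoRA weights into the base model so it can be served without PEFT. The sketch below assumes float16 weights and an illustrative output path.

```python
# Hedged sketch: merge the LoRA adapter into its base model and save the result.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "epfl-llm/meditron-7b", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "th135/meditron-7b_both_n1200")

merged = model.merge_and_unload()             # fold LoRA deltas into the base weights
merged.save_pretrained("meditron-7b_both_n1200-merged")
AutoTokenizer.from_pretrained("epfl-llm/meditron-7b").save_pretrained(
    "meditron-7b_both_n1200-merged"
)
```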
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n1200", "results": []}]}
th135/meditron-7b_both_n1200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:07:43+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n1200 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n1200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n1200\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n1400 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n1400", "results": []}]}
th135/meditron-7b_both_n1400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:07:54+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n1400 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n1400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n1400\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n1600 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n1600", "results": []}]}
th135/meditron-7b_both_n1600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:08:01+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n1600 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n1600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n1600\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n1200 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n1200", "results": []}]}
th135/medalpaca-7b_both_n1200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:08:02+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n1200 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n1200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n1200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n1400 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n1400", "results": []}]}
th135/medalpaca-7b_both_n1400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:08:22+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n1400 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n1400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n1400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n200 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n200", "results": []}]}
th135/medalpaca-13b_both_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:08:36+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n200 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n400 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_gen_n400", "results": []}]}
th135/medalpaca-13b_gen_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:08:42+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_gen_n400 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_gen_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_gen_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
AI Model Name: Llama 3 8B "Built with Meta Llama 3" https://llama.meta.com/llama3/license/ Full walkthrough to reproduce these results here: https://github.com/catid/AQLM/blob/main/catid_readme.md Baseline evaluation results: ``` hf (pretrained=meta-llama/Meta-Llama-3-8B-Instruct), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16 | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |-------------|------:|------|-----:|--------|-----:|---|-----:| |winogrande | 1|none | 0|acc |0.7198|± |0.0126| |piqa | 1|none | 0|acc |0.7873|± |0.0095| | | |none | 0|acc_norm|0.7867|± |0.0096| |hellaswag | 1|none | 0|acc |0.5767|± |0.0049| | | |none | 0|acc_norm|0.7585|± |0.0043| |arc_easy | 1|none | 0|acc |0.8140|± |0.0080| | | |none | 0|acc_norm|0.7971|± |0.0083| |arc_challenge| 1|none | 0|acc |0.5290|± |0.0146| | | |none | 0|acc_norm|0.5674|± |0.0145| ``` This repo evaluation results (AQLM with global fine-tuning): ``` hf (pretrained=catid/cat-llama-3-8b-instruct-aqlm), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: 16 | Tasks |Version|Filter|n-shot| Metric |Value | |Stderr| |-------------|------:|------|-----:|--------|-----:|---|-----:| |winogrande | 1|none | 0|acc |0.7119|± |0.0127| |piqa | 1|none | 0|acc |0.7807|± |0.0097| | | |none | 0|acc_norm|0.7824|± |0.0096| |hellaswag | 1|none | 0|acc |0.5716|± |0.0049| | | |none | 0|acc_norm|0.7539|± |0.0043| |arc_easy | 1|none | 0|acc |0.8152|± |0.0080| | | |none | 0|acc_norm|0.7866|± |0.0084| |arc_challenge| 1|none | 0|acc |0.5043|± |0.0146| | | |none | 0|acc_norm|0.5555|± |0.0145| ``` To reproduce evaluation results: ```bash git clone https://github.com/EleutherAI/lm-evaluation-harness cd lm-evaluation-harness conda create -n lmeval python=3.10 -y && conda activate lmeval pip install -e . pip install accelerate aqlm"[gpu,cpu]" accelerate launch lm_eval --model hf \ --model_args pretrained=catid/cat-llama-3-8b-instruct-aqlm \ --tasks winogrande,piqa,hellaswag,arc_easy,arc_challenge \ --batch_size 16 ``` You can run this model as a `transformers` model using https://github.com/oobabooga/text-generation-webui
{}
catid/cat-llama-3-8b-instruct-aqlm
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:08:44+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
AI Model Name: Llama 3 8B "Built with Meta Llama 3" URL Full walkthrough to reproduce these results here: URL Baseline evaluation results: This repo evaluation results (AQLM with global fine-tuning): To reproduce evaluation results: You can run this model as a 'transformers' model using URL
[]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n" ]
image-classification
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # agusg6/img_classifier_2CLA_augmentation_1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.4705 - Validation Loss: 0.4526 - Train Accuracy: 0.8108 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 22190, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.5453 | 0.5995 | 0.6865 | 0 | | 0.5190 | 0.5308 | 0.7505 | 1 | | 0.5038 | 0.4867 | 0.7847 | 2 | | 0.4928 | 0.4781 | 0.7928 | 3 | | 0.4705 | 0.4526 | 0.8108 | 4 | ### Framework versions - Transformers 4.39.3 - TensorFlow 2.16.1 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "google/vit-base-patch16-224-in21k", "model-index": [{"name": "agusg6/img_classifier_2CLA_augmentation_1", "results": []}]}
agusg6/img_classifier_2CLA_augmentation_1
null
[ "transformers", "tf", "vit", "image-classification", "generated_from_keras_callback", "base_model:google/vit-base-patch16-224-in21k", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:09:09+00:00
[]
[]
TAGS #transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
agusg6/img\_classifier\_2CLA\_augmentation\_1 ============================================= This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 0.4705 * Validation Loss: 0.4526 * Train Accuracy: 0.8108 * Epoch: 4 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': {'module': 'keras.optimizers.schedules', 'class\_name': 'PolynomialDecay', 'config': {'initial\_learning\_rate': 3e-05, 'decay\_steps': 22190, 'end\_learning\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\_name': None}, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.39.3 * TensorFlow 2.16.1 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 22190, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* TensorFlow 2.16.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #vit #image-classification #generated_from_keras_callback #base_model-google/vit-base-patch16-224-in21k #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': {'module': 'keras.optimizers.schedules', 'class\\_name': 'PolynomialDecay', 'config': {'initial\\_learning\\_rate': 3e-05, 'decay\\_steps': 22190, 'end\\_learning\\_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered\\_name': None}, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* TensorFlow 2.16.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n600 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_gen_n600", "results": []}]}
th135/medalpaca-13b_gen_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:09:14+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_gen_n600 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_gen_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_gen_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # meditron-7b_both_n1800 This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "llama2", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b_both_n1800", "results": []}]}
th135/meditron-7b_both_n1800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:epfl-llm/meditron-7b", "license:llama2", "region:us" ]
null
2024-04-21T20:09:23+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us
# meditron-7b_both_n1800 This model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# meditron-7b_both_n1800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-epfl-llm/meditron-7b #license-llama2 #region-us \n", "# meditron-7b_both_n1800\n\nThis model is a fine-tuned version of epfl-llm/meditron-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n1800 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n1800", "results": []}]}
th135/medalpaca-7b_both_n1800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:09:27+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n1800 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n1800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n1800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n400 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n400", "results": []}]}
th135/medalpaca-13b_both_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:09:30+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n400 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_med_n600 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_med_n600", "results": []}]}
th135/medalpaca-13b_med_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:09:33+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_med_n600 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_med_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_med_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_both_n1600 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_both_n1600", "results": []}]}
th135/medalpaca-7b_both_n1600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:09:33+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_both_n1600 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_both_n1600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_both_n1600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n800 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_gen_n800", "results": []}]}
th135/medalpaca-13b_gen_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:09:46+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_gen_n800 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_gen_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_gen_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
VisionForge/Alien-8B-v1.6
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:09:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n600 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n600", "results": []}]}
th135/medalpaca-13b_both_n600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:10:09+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n600 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_gen_n900 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_gen_n900", "results": []}]}
th135/medalpaca-13b_gen_n900_run1
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:10:25+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_gen_n900 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_gen_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_gen_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_med_n800 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_med_n800", "results": []}]}
th135/medalpaca-13b_med_n800
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:10:51+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_med_n800 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_med_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_med_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_med_n900 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_med_n900", "results": []}]}
th135/medalpaca-13b_med_n900
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:11:10+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_med_n900 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_med_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_med_n900\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_ablation_4iters_bs256_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_ablation_4iters_bs256_iter_1", "results": []}]}
ShenaoZ/0.001_ablation_4iters_bs256_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:11:11+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_ablation_4iters_bs256_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.001_ablation_4iters_bs256_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_ablation_4iters_bs256_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
savinda99/roberta-hate-speech-dynabench-r4-target-advasary-debiased-without-dialect
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:11:24+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n800 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n800", "results": []}]}
th135/medalpaca-13b_both_n800_run1
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:11:28+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n800 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n800\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n1000 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n1000", "results": []}]}
th135/medalpaca-13b_both_n1000
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:12:48+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n1000 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n1000\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n1000\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n1200 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n1200", "results": []}]}
th135/medalpaca-13b_both_n1200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:13:29+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n1200 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n1200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n1200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n1400 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n1400", "results": []}]}
th135/medalpaca-13b_both_n1400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:13:47+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n1400 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n1400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n1400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
# Meta-Llama-Guard-2-8B-GGUF Quantized Meta-Llama-Guard-2-8B models using recent versions of llama.cpp.
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["llama", "llama-3"], "model_name": "Meta-Llama-Guard-2-8B", "base_model": "meta-llama/Meta-Llama-Guard-2-8B", "pipeline_tag": "text-generation", "model_creator": "meta-llama", "model_type": "Llama3", "inference": false}
jpodivin/Meta-Llama-Guard-2-8B-GGUF
null
[ "transformers", "gguf", "llama", "llama-3", "text-generation", "en", "base_model:meta-llama/Meta-Llama-Guard-2-8B", "license:other", "region:us" ]
null
2024-04-21T20:14:00+00:00
[]
[ "en" ]
TAGS #transformers #gguf #llama #llama-3 #text-generation #en #base_model-meta-llama/Meta-Llama-Guard-2-8B #license-other #region-us
# Meta-Llama-Guard-2-8B-GGUF Quantized Meta-Llama-Guard-2-8B models using recent versions of URL.
[ "# Meta-Llama-Guard-2-8B-GGUF\n\nQuantized Meta-Llama-Guard-2-8B models using recent versions of URL." ]
[ "TAGS\n#transformers #gguf #llama #llama-3 #text-generation #en #base_model-meta-llama/Meta-Llama-Guard-2-8B #license-other #region-us \n", "# Meta-Llama-Guard-2-8B-GGUF\n\nQuantized Meta-Llama-Guard-2-8B models using recent versions of URL." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
engine58848/cxnlrcqw_sn2_68
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:14:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-small-swahili This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6037 - eval_wer: 40.4120 - eval_runtime: 2898.7871 - eval_samples_per_second: 3.532 - eval_steps_per_second: 0.442 - epoch: 1.5625 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_11_0"], "base_model": "openai/whisper-small", "model-index": [{"name": "whisper-small-swahili", "results": []}]}
diane20000000000000/whisper-small-swahili
null
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_11_0", "base_model:openai/whisper-small", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:14:20+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us
# whisper-small-swahili This model is a fine-tuned version of openai/whisper-small on the common_voice_11_0 dataset. It achieves the following results on the evaluation set: - eval_loss: 0.6037 - eval_wer: 40.4120 - eval_runtime: 2898.7871 - eval_samples_per_second: 3.532 - eval_steps_per_second: 0.442 - epoch: 1.5625 - step: 1000 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.40.0 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.19.1
[ "# whisper-small-swahili\n\nThis model is a fine-tuned version of openai/whisper-small on the common_voice_11_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6037\n- eval_wer: 40.4120\n- eval_runtime: 2898.7871\n- eval_samples_per_second: 3.532\n- eval_steps_per_second: 0.442\n- epoch: 1.5625\n- step: 1000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
[ "TAGS\n#transformers #tensorboard #safetensors #whisper #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_11_0 #base_model-openai/whisper-small #license-apache-2.0 #endpoints_compatible #region-us \n", "# whisper-small-swahili\n\nThis model is a fine-tuned version of openai/whisper-small on the common_voice_11_0 dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 0.6037\n- eval_wer: 40.4120\n- eval_runtime: 2898.7871\n- eval_samples_per_second: 3.532\n- eval_steps_per_second: 0.442\n- epoch: 1.5625\n- step: 1000", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- training_steps: 4000\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_both_n1600 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_both_n1600", "results": []}]}
th135/medalpaca-13b_both_n1600
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:15:06+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_both_n1600 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_both_n1600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_both_n1600\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_english_to_nepali_tst This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0789 - Bleu: 13.3322 - Gen Len: 30.295 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 125 | 2.1378 | 10.6717 | 30.905 | | No log | 2.0 | 250 | 2.1030 | 12.5862 | 31.295 | | No log | 3.0 | 375 | 2.0842 | 12.9723 | 29.675 | | 2.0989 | 4.0 | 500 | 2.0782 | 13.1558 | 30.38 | | 2.0989 | 5.0 | 625 | 2.0789 | 13.3322 | 30.295 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "facebook/nllb-200-distilled-600M", "model-index": [{"name": "my_awesome_english_to_nepali_tst", "results": []}]}
rujengelal/my_awesome_english_to_nepali_tst
null
[ "transformers", "tensorboard", "safetensors", "m2m_100", "text2text-generation", "generated_from_trainer", "base_model:facebook/nllb-200-distilled-600M", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:15:12+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #m2m_100 #text2text-generation #generated_from_trainer #base_model-facebook/nllb-200-distilled-600M #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us
my\_awesome\_english\_to\_nepali\_tst ===================================== This model is a fine-tuned version of facebook/nllb-200-distilled-600M on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0789 * Bleu: 13.3322 * Gen Len: 30.295 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 5 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #m2m_100 #text2text-generation #generated_from_trainer #base_model-facebook/nllb-200-distilled-600M #license-cc-by-nc-4.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Uploaded model - **Developed by:** feliphe-galiza - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
feliphe-galiza/llama-3-1B.italian-hypernyms
null
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:15:23+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
# Uploaded model - Developed by: feliphe-galiza - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: feliphe-galiza\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #text-generation-inference #unsloth #llama #trl #en #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: feliphe-galiza\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.001_ablation_5iters_bs256_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.001_ablation_5iters_bs256_iter_1", "results": []}]}
ShenaoZ/0.001_ablation_5iters_bs256_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:15:37+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.001_ablation_5iters_bs256_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.001_ablation_5iters_bs256_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.001_ablation_5iters_bs256_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0_ablation_6iters_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_ablation_6iters_iter_1", "results": []}]}
ZhangShenao/0.0_ablation_6iters_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:15:48+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_ablation_6iters_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0_ablation_6iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_ablation_6iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-generation
transformers
<img src="./llama-3-merges.webp" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Llama-3-11B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
{"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["mergekit", "merge", "facebook", "meta", "pytorch", "llama", "llama-3"], "model_name": "Llama-3-11B", "base_model": "meta-llama/Meta-Llama-3-8B", "pipeline_tag": "text-generation", "license_name": "llama3", "license_link": "LICENSE", "inference": false, "model_creator": "MaziyarPanahi", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Llama-3-11B
null
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "facebook", "meta", "pytorch", "llama-3", "en", "base_model:meta-llama/Meta-Llama-3-8B", "license:other", "autotrain_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:17:06+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #en #base_model-meta-llama/Meta-Llama-3-8B #license-other #autotrain_compatible #text-generation-inference #region-us
<img src="./URL" alt="Goku 8x22B v0.1 Logo" width="500" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Llama-3-11B This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the passthrough merge method. ### Models Merged The following models were included in the merge: * meta-llama/Meta-Llama-3-8B
[ "# Llama-3-11B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #mergekit #merge #facebook #meta #pytorch #llama-3 #en #base_model-meta-llama/Meta-Llama-3-8B #license-other #autotrain_compatible #text-generation-inference #region-us \n", "# Llama-3-11B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the passthrough merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* meta-llama/Meta-Llama-3-8B" ]
text-generation
transformers
# Uploaded model - **Developed by:** kevinkawchak - **License:** llama3 - **Finetuned from model :** meta-llama/Meta-Llama-3-8B-Instruct - **Finetuned using dataset :** yahma/alpaca-cleaned [Cover Image](https://drive.google.com/file/d/1lYHv7r_2CxuT4DQjd-3lkKfNZiJi_SDk/view?usp=sharing). [META LLAMA 3 COMMUNITY LICENSE AGREEMENT](https://llama.meta.com/llama3/license/). Built with Meta Llama 3. <br> The Meta-Llama-3-8B-Instruct Large Language Model was fine-tuned using the alpaca-cleaned dataset and varying LoRA values. LoRA experiments were run to affect model size and performance, with the top model being uploaded to Hugging Face as a 'Text generation' model: kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1. (a) Open-source Llama 3 8 Billion parameter model was released April 18, 2024, and its selection was based on several highly functional existing Llama 2 Medical applications, an increase in Llama 3 downloads by developers, and Meta's growing infrastructure to support Llama improvements. (b-e) Low-Rank Adaptation, also known as LoRA, makes fine-tuning LLMs easier by reducing the number of trainable parameters to produce lightweight and efficient models. LoRA was utilized by modifying matrix rank 'r' and alpha values. Run times were similar between experiments; however, loss tended to favor lower values of rank, which corresponded to smaller model sizes. The most effective 'R1' model mentioned above was trained on a rank of 1 and alpha of 5, which performed better by loss than a rank of up to 5. Further testing of the model uploaded to Hugging Face is relevant to determine what benefits the finetuning has over the base Llama 3 8B Instruct model. (f) A recent Unsloth Llama 3 Colab Notebook, a Llama 3 discussion, and clarifications from Hugging Chat and ChatGPT allowed for the completion of the fine-tuning of the model using an NVIDIA 40GB A100. The remainder of the notebook experiments are available on GitHub. (g-i) The yahma/alpaca-cleaned dataset used is an update to resolve issues regarding the original 2023 Stanford alpaca instruction dataset. Text generations answering several questions were of high quality and comparable to other leading GenAI platforms. Answers to the following questions are available on GitHub notebooks such as 'What is a famous university in San Francisco bay area?', and 'What are the DNA bases?' (j) A look ahead at datasets for Drug Discovery Generative AI will be covered on Thursday 04/25/2024. a) https://huggingface.co/kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1 <br> b) https://ollama.com/library/medllama2 <br> c) https://github.com/AIAnytime/Llama2-Medical-Chatbot <br> d) https://github.com/entbappy/End-to-end-Medical-Chatbot-using-Llama2 <br> e) https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/ <br> f) https://snorkel.ai/lora-low-rank-adaptation-for-llms/ <br> g) https://colab.research.google.com/drive/1mPw6P52cERr93w3CMBiJjocdTnyPiKTX#scrollTo=6bZsfBuZDeCL <br> h) https://www.youtube.com/watch?v=aQmoog_s8HE&t=1s <br> i) https://github.com/kevinkawchak/Medical-Quantum-Machine-Learning/tree/main/Code/Hugging%20Face/Llama-3-8B-Instruct <br> j) https://huggingface.co/datasets/yahma/alpaca-cleaned <br> This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
{"language": ["en"], "license": "llama3", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "datasets": ["yahma/alpaca-cleaned"], "base_model": "meta-llama/Meta-Llama-3-8B-Instruct"}
kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1
null
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "conversational", "en", "dataset:yahma/alpaca-cleaned", "base_model:meta-llama/Meta-Llama-3-8B-Instruct", "license:llama3", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:17:07+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-yahma/alpaca-cleaned #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #region-us
# Uploaded model - Developed by: kevinkawchak - License: llama3 - Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct - Finetuned using dataset : yahma/alpaca-cleaned Cover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br> The Meta-Llama-3-8B-Instruct Large Language Model was fine-tuned using the alpaca-cleaned dataset and varying LoRA values. LoRA experiments were run to affect model size and performance, with the top model being uploaded to Hugging Face as a 'Text generation' model: kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1. (a) Open-source Llama 3 8 Billion parameter model was released April 18, 2024, and its selection was based on several highly functional existing Llama 2 Medical applications, an increase in Llama 3 downloads by developers, and Meta's growing infrastructure to support Llama improvements. (b-e) Low-Rank Adaptation, also known as LoRA, makes fine-tuning LLMs easier by reducing the number of trainable parameters to produce lightweight and efficient models. LoRA was utilized by modifying matrix rank 'r' and alpha values. Run times were similar between experiments; however, loss tended to favor lower values of rank, which corresponded to smaller model sizes. The most effective 'R1' model mentioned above was trained on a rank of 1 and alpha of 5, which performed better by loss than a rank of up to 5. Further testing of the model uploaded to Hugging Face is relevant to determine what benefits the finetuning has over the base Llama 3 8B Instruct model. (f) A recent Unsloth Llama 3 Colab Notebook, a Llama 3 discussion, and clarifications from Hugging Chat and ChatGPT allowed for the completion of the fine-tuning of the model using an NVIDIA 40GB A100. The remainder of the notebook experiments are available on GitHub. (g-i) The yahma/alpaca-cleaned dataset used is an update to resolve issues regarding the original 2023 Stanford alpaca instruction dataset. Text generations answering several questions were of high quality and comparable to other leading GenAI platforms. Answers to the following questions are available on GitHub notebooks such as 'What is a famous university in San Francisco bay area?', and 'What are the DNA bases?' (j) A look ahead at datasets for Drug Discovery Generative AI will be covered on Thursday 04/25/2024. a) URL <br> b) URL <br> c) URL <br> d) URL <br> e) URL <br> f) URL <br> g) URL <br> h) URL <br> i) URL <br> j) URL <br> This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/>
[ "# Uploaded model\n\n- Developed by: kevinkawchak\n- License: llama3\n- Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct\n- Finetuned using dataset : yahma/alpaca-cleaned\n\nCover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>\n\nThe Meta-Llama-3-8B-Instruct Large Language Model was fine-tuned using the alpaca-cleaned dataset and varying LoRA values. LoRA experiments were run to affect model size and performance, with the top model being uploaded to Hugging Face as a 'Text generation' model: kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1. (a) \n\nOpen-source Llama 3 8 Billion parameter model was released April 18, 2024, and it's selection was based on several highly functional existing Llama 2 Medical applications, an increase in Llama 3 downloads by developers, and Meta's growing infrastructure to support Llama improvements. (b-e)\n\nLow-Rank Adaptation, also known as LoRA, makes fine-tuning LLMs easier by reducing the number of trainable parameters to produce lightweight and efficient models. LoRA was utilized by modifying matrix rank 'r' and alpha values. Run times were similar between experiments, however loss tended to favor lower values of rank, which corresponded to smaller model sizes. The most effective 'R1' model mentioned above was trained on a rank of 1 and alpha of 5, which performed better by loss than a rank of up to 5. Further testing of the model uploaded to Hugging Face is relevant to determine what benefits the finetuning has over the base Llama 3 8B Instruct model. (f) \n\nA recent Unsloth Llama 3 Colab Notebook, a Llama 3 discussion, and clarifications from Hugging Chat and ChatGPT allowed for the completion of the finetuned of the model using an NVIDIA 40GB A100. The remainder of the notebook experiments are avaialable on GitHub. (g-i)\n\nThe yahma/alpaca-cleaned dataset used is an update to resolve issues regarding the original 2023 Stanford alpaca instruction dataset. Text generations answering several questions were of high quality and comparable to other leading GenAI platforms. Answers to the following questions are available on GitHub notebooks such as 'What is a famous university in San Francisco bay area?', and 'What are the DNA bases?' (j) A look ahead at datasets for Drug Discovery Generative AI will be covered on Thursday 04/25/2024. \n\na) URL <br>\nb) URL <br>\nc) URL <br>\nd) URL <br>\ne) URL <br>\nf) URL <br>\ng) URL <br>\nh) URL <br>\ni) URL <br>\nj) URL <br>\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #text-generation-inference #unsloth #trl #conversational #en #dataset-yahma/alpaca-cleaned #base_model-meta-llama/Meta-Llama-3-8B-Instruct #license-llama3 #autotrain_compatible #endpoints_compatible #region-us \n", "# Uploaded model\n\n- Developed by: kevinkawchak\n- License: llama3\n- Finetuned from model : meta-llama/Meta-Llama-3-8B-Instruct\n- Finetuned using dataset : yahma/alpaca-cleaned\n\nCover Image. META LLAMA 3 COMMUNITY LICENSE AGREEMENT. Built with Meta Llama 3. <br>\n\nThe Meta-Llama-3-8B-Instruct Large Language Model was fine-tuned using the alpaca-cleaned dataset and varying LoRA values. LoRA experiments were run to affect model size and performance, with the top model being uploaded to Hugging Face as a 'Text generation' model: kevinkawchak/Meta-Llama-3-8B-Instruct-LoRA-R1. (a) \n\nOpen-source Llama 3 8 Billion parameter model was released April 18, 2024, and it's selection was based on several highly functional existing Llama 2 Medical applications, an increase in Llama 3 downloads by developers, and Meta's growing infrastructure to support Llama improvements. (b-e)\n\nLow-Rank Adaptation, also known as LoRA, makes fine-tuning LLMs easier by reducing the number of trainable parameters to produce lightweight and efficient models. LoRA was utilized by modifying matrix rank 'r' and alpha values. Run times were similar between experiments, however loss tended to favor lower values of rank, which corresponded to smaller model sizes. The most effective 'R1' model mentioned above was trained on a rank of 1 and alpha of 5, which performed better by loss than a rank of up to 5. Further testing of the model uploaded to Hugging Face is relevant to determine what benefits the finetuning has over the base Llama 3 8B Instruct model. (f) \n\nA recent Unsloth Llama 3 Colab Notebook, a Llama 3 discussion, and clarifications from Hugging Chat and ChatGPT allowed for the completion of the finetuned of the model using an NVIDIA 40GB A100. The remainder of the notebook experiments are avaialable on GitHub. (g-i)\n\nThe yahma/alpaca-cleaned dataset used is an update to resolve issues regarding the original 2023 Stanford alpaca instruction dataset. Text generations answering several questions were of high quality and comparable to other leading GenAI platforms. Answers to the following questions are available on GitHub notebooks such as 'What is a famous university in San Francisco bay area?', and 'What are the DNA bases?' (j) A look ahead at datasets for Drug Discovery Generative AI will be covered on Thursday 04/25/2024. \n\na) URL <br>\nb) URL <br>\nc) URL <br>\nd) URL <br>\ne) URL <br>\nf) URL <br>\ng) URL <br>\nh) URL <br>\ni) URL <br>\nj) URL <br>\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
daisysxm76/GPT2-XL-v2-DPO
null
[ "transformers", "safetensors", "gpt2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:17:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gpt2 #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # 0.0_ablation_5iters_iter_1 This model is a fine-tuned version of [HuggingFaceH4/mistral-7b-sft-beta](https://huggingface.co/HuggingFaceH4/mistral-7b-sft-beta) on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
{"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "HuggingFaceH4/mistral-7b-sft-beta", "model-index": [{"name": "0.0_ablation_5iters_iter_1", "results": []}]}
ZhangShenao/0.0_ablation_5iters_iter_1
null
[ "transformers", "safetensors", "mistral", "text-generation", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "conversational", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:HuggingFaceH4/mistral-7b-sft-beta", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:18:05+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# 0.0_ablation_5iters_iter_1 This model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.36.2 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.15.2
[ "# 0.0_ablation_5iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #generated_from_trainer #trl #dpo #conversational #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-HuggingFaceH4/mistral-7b-sft-beta #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# 0.0_ablation_5iters_iter_1\n\nThis model is a fine-tuned version of HuggingFaceH4/mistral-7b-sft-beta on the HuggingFaceH4/ultrafeedback_binarized dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-07\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- distributed_type: multi-GPU\n- num_devices: 8\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 256\n- total_eval_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_ratio: 0.1\n- num_epochs: 1", "### Training results", "### Framework versions\n\n- Transformers 4.36.2\n- Pytorch 2.1.2+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.2" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
SkwarczynskiP/bert-base-uncased-finetuned-vedantgaur-AI-and-human-generated
null
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-21T20:19:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Q-Learning** Agent playing **FrozenLake-v1**
  This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .

  ## Usage

  ```python

  model = load_from_hub(repo_id="AishwaryaDixit/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

  # Don't forget to check if you need to add additional attributes (is_slippery=False etc)
  env = gym.make(model["env_id"])
  ```
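As a hedged continuation of the snippet above (not part of the original card), a minimal greedy-evaluation loop might look like this; it assumes the pickled dict exposes the learned table under `model["qtable"]` and that a gymnasium-style reset/step API is in use:

```python
# Sketch only: greedy rollout with the loaded Q-table.
# `model` and `env` come from the snippet above; the "qtable" key name is an assumption.
import numpy as np

state, _ = env.reset()  # gymnasium-style reset -> (observation, info)
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action for this state
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```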
{"tags": ["FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-FrozenLake-v1-4x4-noSlippery", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "FrozenLake-v1-4x4-no_slippery", "type": "FrozenLake-v1-4x4-no_slippery"}, "metrics": [{"type": "mean_reward", "value": "1.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
AishwaryaDixit/q-FrozenLake-v1-4x4-noSlippery
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-21T20:20:42+00:00
[]
[]
TAGS #FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing FrozenLake-v1
 This is a trained model of a Q-Learning agent playing FrozenLake-v1 .

 ## Usage
[ "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
[ "TAGS\n#FrozenLake-v1-4x4-no_slippery #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 FrozenLake-v1\n This is a trained model of a Q-Learning agent playing FrozenLake-v1 .\n\n ## Usage" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
heyllm234/sc58
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:21:56+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
savinda99/roberta-hate-speech-dynabench-r4-target-advasary-debiased
null
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:22:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #roberta #text-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
lklimkiewicz/midi-ganerator-game
null
[ "transformers", "safetensors", "opt", "feature-extraction", "custom_code", "arxiv:1910.09700", "text-generation-inference", "region:us" ]
null
2024-04-21T20:23:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #opt #feature-extraction #custom_code #arxiv-1910.09700 #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #opt #feature-extraction #custom_code #arxiv-1910.09700 #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # detr This model is a fine-tuned version of [facebook/detr-resnet-50](https://huggingface.co/facebook/detr-resnet-50) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.2889 - eval_runtime: 145.9792 - eval_samples_per_second: 6.85 - eval_steps_per_second: 0.856 - epoch: 0.64 - step: 200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "facebook/detr-resnet-50", "model-index": [{"name": "detr", "results": []}]}
Karl-Erik/detr
null
[ "transformers", "tensorboard", "safetensors", "detr", "object-detection", "generated_from_trainer", "base_model:facebook/detr-resnet-50", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:23:09+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us
# detr This model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.2889 - eval_runtime: 145.9792 - eval_samples_per_second: 6.85 - eval_steps_per_second: 0.856 - epoch: 0.64 - step: 200 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.19.0 - Tokenizers 0.15.2
[ "# detr\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.2889\n- eval_runtime: 145.9792\n- eval_samples_per_second: 6.85\n- eval_steps_per_second: 0.856\n- epoch: 0.64\n- step: 200", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #detr #object-detection #generated_from_trainer #base_model-facebook/detr-resnet-50 #license-apache-2.0 #endpoints_compatible #region-us \n", "# detr\n\nThis model is a fine-tuned version of facebook/detr-resnet-50 on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- eval_loss: 1.2889\n- eval_runtime: 145.9792\n- eval_samples_per_second: 6.85\n- eval_steps_per_second: 0.856\n- epoch: 0.64\n- step: 200", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 32\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2\n- mixed_precision_training: Native AMP", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
nmdr/Mistral-7B-Instruct-v0.2-platypus-1k-buzzlightyear
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:24:46+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
RobertML/sn66
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:25:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-7b_med_n200 This model is a fine-tuned version of [medalpaca/medalpaca-7b](https://huggingface.co/medalpaca/medalpaca-7b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
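The card lists only the training setup; the sketch below shows one plausible way to load the result, assuming this repository (`th135/medalpaca-7b_med_n200`) holds a standard PEFT LoRA adapter for the `medalpaca/medalpaca-7b` base named above. The prompt format in the example is purely illustrative.

```python
# Sketch (assumption): attach the PEFT adapter in this repository to the medalpaca-7b base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "medalpaca/medalpaca-7b"          # base model named by the card
adapter_id = "th135/medalpaca-7b_med_n200"  # this repository (assumed to hold the adapter)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

# Quick generation check (the prompt template here is an illustrative assumption).
inputs = tokenizer("Question: What is hypertension?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```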
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-7b", "model-index": [{"name": "medalpaca-7b_med_n200", "results": []}]}
th135/medalpaca-7b_med_n200
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-7b", "license:cc", "region:us" ]
null
2024-04-21T20:27:11+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us
# medalpaca-7b_med_n200 This model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-7b_med_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-7b #license-cc #region-us \n", "# medalpaca-7b_med_n200\n\nThis model is a fine-tuned version of medalpaca/medalpaca-7b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# Textual inversion text2image fine-tuning - averma332/textual_inversion_melanoma_embeddings_benign

These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
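The "How to use" snippet above is still a TODO; the sketch below shows one way such an embedding is typically run with the `diffusers` textual-inversion loader. The placeholder token string is an assumption (check the repository's learned-embedding file or the completed card for the actual token), and the prompt is only illustrative.

```python
# Sketch (assumption): load the base pipeline and attach the learned textual-inversion embedding.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The repository id comes from this card; "<benign-lesion>" below is a hypothetical
# placeholder token; replace it with the token stored alongside the embedding.
pipe.load_textual_inversion("averma332/textual_inversion_melanoma_embeddings_benign")

image = pipe("a dermoscopy photo of <benign-lesion>", num_inference_steps=30).images[0]
image.save("benign_example.png")
```

Because textual inversion only adds a token embedding, the base runwayml/stable-diffusion-v1-5 weights are untouched, so scheduler and resolution settings that work for the base pipeline apply here as well.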
{"license": "creativeml-openrail-m", "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "diffusers", "textual_inversion", "diffusers-training"], "base_model": "runwayml/stable-diffusion-v1-5", "inference": true}
averma332/textual_inversion_melanoma_embeddings_benign
null
[ "diffusers", "tensorboard", "safetensors", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "textual_inversion", "diffusers-training", "base_model:runwayml/stable-diffusion-v1-5", "license:creativeml-openrail-m", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us" ]
null
2024-04-21T20:28:09+00:00
[]
[]
TAGS #diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us
# Textual inversion text2image fine-tuning - averma332/textual_inversion_melanoma_embeddings_benign These are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# Textual inversion text2image fine-tuning - averma332/textual_inversion_melanoma_embeddings_benign\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #tensorboard #safetensors #stable-diffusion #stable-diffusion-diffusers #text-to-image #textual_inversion #diffusers-training #base_model-runwayml/stable-diffusion-v1-5 #license-creativeml-openrail-m #endpoints_compatible #diffusers-StableDiffusionPipeline #region-us \n", "# Textual inversion text2image fine-tuning - averma332/textual_inversion_melanoma_embeddings_benign\nThese are textual inversion adaption weights for runwayml/stable-diffusion-v1-5. You can find some example images in the following.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
text-to-speech
speechbrain
# Text-to-Speech (TTS) with Transformer trained on LJSpeech

This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [Transformer](https://arxiv.org/pdf/1809.08895.pdf) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).

The pre-trained model takes text as input and produces a spectrogram as output. One can get the final waveform by applying a vocoder (e.g., HiFi-GAN) on top of the generated spectrogram.

### Perform Text-to-Speech (TTS)
```python
import torchaudio
from speechbrain.inference.vocoders import HIFIGAN

texts = ["This is the example text"]

# Initialize the TTS model.
# Note: `TextToSpeech` is assumed to be the custom inference interface class shipped
# with this repository; import it from the repository's interface file and point
# `source` at the folder containing the hyperparameters and checkpoint.
my_tts_model = TextToSpeech.from_hparams(source="/content/")

# Initialize the vocoder (HiFi-GAN) model
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-ljspeech", savedir="tmpdir_vocoder")

# Run the TTS (text -> mel spectrogram)
mel_output = my_tts_model.encode_text(texts)

# Run the vocoder (spectrogram -> waveform)
waveforms = hifi_gan.decode_batch(mel_output)

# Save the waveform (LJSpeech sampling rate is 22050 Hz)
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
```
{"language": "en", "license": "apache-2.0", "library_name": "speechbrain", "tags": ["text-to-speech", "TTS", "speech-synthesis", "speechbrain"], "datasets": ["LJSpeech"], "pipeline_tag": "text-to-speech"}
Sharanya186/TextToSpeech
null
[ "speechbrain", "text-to-speech", "TTS", "speech-synthesis", "en", "dataset:LJSpeech", "arxiv:1809.08895", "license:apache-2.0", "region:us" ]
null
2024-04-21T20:28:16+00:00
[ "1809.08895" ]
[ "en" ]
TAGS #speechbrain #text-to-speech #TTS #speech-synthesis #en #dataset-LJSpeech #arxiv-1809.08895 #license-apache-2.0 #region-us
# Text-to-Speech (TTS) with Transformer trained on LJSpeech This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a Transformer pretrained on LJSpeech. The pre-trained model takes in text input and produces a spectrogram in output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram. ### Perform Text-to-Speech (TTS)
[ "# Text-to-Speech (TTS) with Transformer trained on LJSpeech\n\nThis repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a Transformer pretrained on LJSpeech.\n\nThe pre-trained model takes in text input and produces a spectrogram in output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram.", "### Perform Text-to-Speech (TTS)" ]
[ "TAGS\n#speechbrain #text-to-speech #TTS #speech-synthesis #en #dataset-LJSpeech #arxiv-1809.08895 #license-apache-2.0 #region-us \n", "# Text-to-Speech (TTS) with Transformer trained on LJSpeech\n\nThis repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a Transformer pretrained on LJSpeech.\n\nThe pre-trained model takes in text input and produces a spectrogram in output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram.", "### Perform Text-to-Speech (TTS)" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # medalpaca-13b_med_n400 This model is a fine-tuned version of [medalpaca/medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
{"license": "cc", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "medalpaca/medalpaca-13b", "model-index": [{"name": "medalpaca-13b_med_n400", "results": []}]}
th135/medalpaca-13b_med_n400
null
[ "peft", "safetensors", "llama", "generated_from_trainer", "base_model:medalpaca/medalpaca-13b", "license:cc", "region:us" ]
null
2024-04-21T20:28:30+00:00
[]
[]
TAGS #peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us
# medalpaca-13b_med_n400 This model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.8.2 - Transformers 4.37.2 - Pytorch 2.1.2 - Datasets 2.14.6 - Tokenizers 0.15.1
[ "# medalpaca-13b_med_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
[ "TAGS\n#peft #safetensors #llama #generated_from_trainer #base_model-medalpaca/medalpaca-13b #license-cc #region-us \n", "# medalpaca-13b_med_n400\n\nThis model is a fine-tuned version of medalpaca/medalpaca-13b on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 128\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 10\n- num_epochs: 3\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.2\n- Pytorch 2.1.2\n- Datasets 2.14.6\n- Tokenizers 0.15.1" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
domenicrosati/decoding_trust_attack_3e5
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:29:31+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
candenizkocak/Mistral-7B-finance_alpaca
null
[ "transformers", "safetensors", "mistral", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-21T20:29:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# phi-2-gpo-renew2-b0.001-i1

This model is a fine-tuned version of [DUAL-GPO/phi-2-gpo-renew2-b0.001-i0](https://huggingface.co/DUAL-GPO/phi-2-gpo-renew2-b0.001-i0) on the HuggingFaceH4/ultrafeedback_binarized dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0538
- Rewards/chosen: 0.0010
- Rewards/rejected: 0.0012
- Rewards/accuracies: 0.4290
- Rewards/margins: -0.0002
- Logps/rejected: -366.0280
- Logps/chosen: -395.2844
- Logits/rejected: -0.7463
- Logits/chosen: -0.8436

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1204        | 0.32  | 100  | 0.0537          | 0.0002         | 0.0002           | 0.4495             | -0.0000         | -367.0282      | -396.1279    | -0.7448         | -0.8372       |
| 0.1673        | 0.64  | 200  | 0.0538          | 0.0013         | 0.0015           | 0.4305             | -0.0002         | -365.7495      | -395.0410    | -0.7569         | -0.8518       |
| 0.1395        | 0.96  | 300  | 0.0538          | 0.0010         | 0.0012           | 0.4395             | -0.0002         | -365.9886      | -395.3006    | -0.7587         | -0.8541       |

### Framework versions

- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
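The card above gives no usage snippet; a minimal sketch for running the adapter, assuming the repository hosts a standard PEFT adapter on top of `microsoft/phi-2` (as the `peft` library tag and `base_model` metadata suggest) and using an illustrative prompt and generation settings, might look like this:

```python
# Minimal sketch (assumptions noted above): load the DPO-tuned PEFT adapter on top of phi-2.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",
    torch_dtype=torch.bfloat16,   # half precision to keep memory use modest
    trust_remote_code=True,       # phi-2 ships custom modeling code for older Transformers versions
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")

# Attach the adapter weights published in this repository.
model = PeftModel.from_pretrained(base, "DUAL-GPO/phi-2-gpo-renew2-b0.001-i1")

prompt = "Instruct: Summarize what DPO training does.\nOutput:"  # illustrative prompt format
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```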
{"license": "mit", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["HuggingFaceH4/ultrafeedback_binarized"], "base_model": "microsoft/phi-2", "model-index": [{"name": "phi-2-gpo-renew2-b0.001-i1", "results": []}]}
DUAL-GPO/phi-2-gpo-renew2-b0.001-i1
null
[ "peft", "tensorboard", "safetensors", "phi", "alignment-handbook", "generated_from_trainer", "trl", "dpo", "custom_code", "dataset:HuggingFaceH4/ultrafeedback_binarized", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-21T20:30:11+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #phi #alignment-handbook #generated_from_trainer #trl #dpo #custom_code #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-microsoft/phi-2 #license-mit #region-us
phi-2-gpo-renew2-b0.001-i1 ========================== This model is a fine-tuned version of DUAL-GPO/phi-2-gpo-renew2-b0.001-i0 on the HuggingFaceH4/ultrafeedback\_binarized dataset. It achieves the following results on the evaluation set: * Loss: 0.0538 * Rewards/chosen: 0.0010 * Rewards/rejected: 0.0012 * Rewards/accuracies: 0.4290 * Rewards/margins: -0.0002 * Logps/rejected: -366.0280 * Logps/chosen: -395.2844 * Logits/rejected: -0.7463 * Logits/chosen: -0.8436 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 5e-06 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * distributed\_type: multi-GPU * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_ratio: 0.1 * num\_epochs: 1 ### Training results ### Framework versions * PEFT 0.7.1 * Transformers 4.36.2 * Pytorch 2.1.2 * Datasets 2.14.6 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #phi #alignment-handbook #generated_from_trainer #trl #dpo #custom_code #dataset-HuggingFaceH4/ultrafeedback_binarized #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-06\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* PEFT 0.7.1\n* Transformers 4.36.2\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2" ]
null
transformers
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/6305880b2a359dee8a01dccd/kngQqS2VkDb5q5LEovp6v.webp) # BabyDolphin-8B-LLaMA3-Uncensored - **Developed by:** babycommando - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit This model, `babydolphin-8b-llama3-uncensored`, is an 8-billion parameter subset of the larger LLaMA (Large Language Model by Meta) and has been fine-tuned on the `cognitivecomputations/dolphin` dataset specifically for the FLAN1M-Alpaca-Uncensored tasks. It incorporates cutting-edge transformer architectures optimized for a balance between performance and efficiency. ## Model Description `babydolphin-8b-llama3-uncensored` is designed to deliver powerful language understanding and generation capabilities while ensuring compliance with non-censorship standards for diverse application scenarios. This version is ideal for applications requiring high-quality text generation where content restrictions are minimal. ### Technical Details - **Base Model**: LLaMA3 - **Parameters**: 8 billion - **Fine-tuning Dataset**: cognitivecomputations/dolphin FLAN1M-Alpaca-Uncensored ### Quantization and Configuration This model is available in multiple configurations to best suit different deployment needs: - **f16**: Fastest conversion, retains 100% accuracy but is slow and memory-intensive. - **q4_k_m**: Recommended for general use, balancing between speed and efficiency. - **q3_k_m**: Good for environments where model size and speed are more critical than detailed accuracy. - **q3_k_s**: Maximizes speed and minimizes model size, suitable for very resource-constrained environments. ## Intended Use This model is intended for researchers and developers needing advanced natural language processing capabilities without censorship restrictions. It is particularly well-suited for generating text in scenarios where nuanced, unrestricted content generation is crucial. ## How to Use For Ollama, check their docs for [running a GGUF model on Ollama](https://github.com/ollama/ollama/blob/main/docs/import.md) Here is how to load and use the model in your projects using Hugging Face Transformers: ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "babycommando/babydolphin-8b-llama3-uncensored" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name) inputs = tokenizer("Hello, world!", return_tensors="pt") outputs = model.generate(inputs["input_ids"]) print(tokenizer.decode(outputs[0])) ``` ### Training Loss Over 60 Epochs ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6305880b2a359dee8a01dccd/RAakycn0OTxHHY26qZLX-.png) - This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) ## Usage Copy this markdown content into your model's page on the Hugging Face Model Hub to provide users with a clear, informative description of what your model can do and how it can be used. Adjust the `model_name` variable in the Python code snippet to reflect the actual path to your model on Hugging Face for ease of use by others.
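The quantization section above describes several GGUF builds, but the usage example only covers the full-precision Transformers path; a minimal llama-cpp-python sketch for running one of the GGUF files might look like the following (the local filename is hypothetical; substitute whichever quantized file you actually downloaded from the repository):

```python
# Minimal sketch: run a GGUF quantization of the model with llama-cpp-python.
from llama_cpp import Llama

# Hypothetical local path: replace with the GGUF file (e.g. the q4_k_m build) downloaded from the repo.
llm = Llama(model_path="./babydolphin-8b-llama3-uncensored.q4_k_m.gguf", n_ctx=4096)

output = llm(
    "Hello, world!",   # prompt
    max_tokens=128,    # cap the generated length
    temperature=0.7,   # moderate sampling randomness
)
print(output["choices"][0]["text"])
```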
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl", "sft", "llama3", "llama38b", "8b", "dolphin", "babydolphin"], "datasets": ["cognitivecomputations/dolphin"], "base_model": "unsloth/llama-3-8b-bnb-4bit"}
babycommando/babydolphin-8b-llama3-uncensored
null
[ "transformers", "pytorch", "safetensors", "gguf", "llama", "text-generation-inference", "unsloth", "trl", "sft", "llama3", "llama38b", "8b", "dolphin", "babydolphin", "en", "dataset:cognitivecomputations/dolphin", "base_model:unsloth/llama-3-8b-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-21T20:31:17+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #llama3 #llama38b #8b #dolphin #babydolphin #en #dataset-cognitivecomputations/dolphin #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us
!image/webp # BabyDolphin-8B-LLaMA3-Uncensored - Developed by: babycommando - License: apache-2.0 - Finetuned from model : unsloth/llama-3-8b-bnb-4bit This model, 'babydolphin-8b-llama3-uncensored', is an 8-billion parameter subset of the larger LLaMA (Large Language Model by Meta) and has been fine-tuned on the 'cognitivecomputations/dolphin' dataset specifically for the FLAN1M-Alpaca-Uncensored tasks. It incorporates cutting-edge transformer architectures optimized for a balance between performance and efficiency. ## Model Description 'babydolphin-8b-llama3-uncensored' is designed to deliver powerful language understanding and generation capabilities while ensuring compliance with non-censorship standards for diverse application scenarios. This version is ideal for applications requiring high-quality text generation where content restrictions are minimal. ### Technical Details - Base Model: LLaMA3 - Parameters: 8 billion - Fine-tuning Dataset: cognitivecomputations/dolphin FLAN1M-Alpaca-Uncensored ### Quantization and Configuration This model is available in multiple configurations to best suit different deployment needs: - f16: Fastest conversion, retains 100% accuracy but is slow and memory-intensive. - q4_k_m: Recommended for general use, balancing between speed and efficiency. - q3_k_m: Good for environments where model size and speed are more critical than detailed accuracy. - q3_k_s: Maximizes speed and minimizes model size, suitable for very resource-constrained environments. ## Intended Use This model is intended for researchers and developers needing advanced natural language processing capabilities without censorship restrictions. It is particularly well-suited for generating text in scenarios where nuanced, unrestricted content generation is crucial. ## How to Use For Ollama, check their docs for running a GGUF model on Ollama Here is how to load and use the model in your projects using Hugging Face Transformers: ### Training Loss Over 60 Epochs !image/png - This llama model was trained 2x faster with Unsloth and Huggingface's TRL library. <img src="URL width="200"/> ## Usage Copy this markdown content into your model's page on the Hugging Face Model Hub to provide users with a clear, informative description of what your model can do and how it can be used. Adjust the 'model_name' variable in the Python code snippet to reflect the actual path to your model on Hugging Face for ease of use by others.
[ "# BabyDolphin-8B-LLaMA3-Uncensored\n\n- Developed by: babycommando\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis model, 'babydolphin-8b-llama3-uncensored', is an 8-billion parameter subset of the larger LLaMA (Large Language Model by Meta) and has been fine-tuned on the 'cognitivecomputations/dolphin' dataset specifically for the FLAN1M-Alpaca-Uncensored tasks. It incorporates cutting-edge transformer architectures optimized for a balance between performance and efficiency.", "## Model Description\n\n'babydolphin-8b-llama3-uncensored' is designed to deliver powerful language understanding and generation capabilities while ensuring compliance with non-censorship standards for diverse application scenarios. This version is ideal for applications requiring high-quality text generation where content restrictions are minimal.", "### Technical Details\n\n- Base Model: LLaMA3\n- Parameters: 8 billion\n- Fine-tuning Dataset: cognitivecomputations/dolphin FLAN1M-Alpaca-Uncensored", "### Quantization and Configuration\n\nThis model is available in multiple configurations to best suit different deployment needs:\n\n- f16: Fastest conversion, retains 100% accuracy but is slow and memory-intensive.\n- q4_k_m: Recommended for general use, balancing between speed and efficiency.\n- q3_k_m: Good for environments where model size and speed are more critical than detailed accuracy.\n- q3_k_s: Maximizes speed and minimizes model size, suitable for very resource-constrained environments.", "## Intended Use\n\nThis model is intended for researchers and developers needing advanced natural language processing capabilities without censorship restrictions. It is particularly well-suited for generating text in scenarios where nuanced, unrestricted content generation is crucial.", "## How to Use\n\nFor Ollama, check their docs for running a GGUF model on Ollama\n\nHere is how to load and use the model in your projects using Hugging Face Transformers:", "### Training Loss Over 60 Epochs\n!image/png\n\n\n- \n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>", "## Usage\n\nCopy this markdown content into your model's page on the Hugging Face Model Hub to provide users with a clear, informative description of what your model can do and how it can be used. Adjust the 'model_name' variable in the Python code snippet to reflect the actual path to your model on Hugging Face for ease of use by others." ]
[ "TAGS\n#transformers #pytorch #safetensors #gguf #llama #text-generation-inference #unsloth #trl #sft #llama3 #llama38b #8b #dolphin #babydolphin #en #dataset-cognitivecomputations/dolphin #base_model-unsloth/llama-3-8b-bnb-4bit #license-apache-2.0 #endpoints_compatible #region-us \n", "# BabyDolphin-8B-LLaMA3-Uncensored\n\n- Developed by: babycommando\n- License: apache-2.0\n- Finetuned from model : unsloth/llama-3-8b-bnb-4bit\n\nThis model, 'babydolphin-8b-llama3-uncensored', is an 8-billion parameter subset of the larger LLaMA (Large Language Model by Meta) and has been fine-tuned on the 'cognitivecomputations/dolphin' dataset specifically for the FLAN1M-Alpaca-Uncensored tasks. It incorporates cutting-edge transformer architectures optimized for a balance between performance and efficiency.", "## Model Description\n\n'babydolphin-8b-llama3-uncensored' is designed to deliver powerful language understanding and generation capabilities while ensuring compliance with non-censorship standards for diverse application scenarios. This version is ideal for applications requiring high-quality text generation where content restrictions are minimal.", "### Technical Details\n\n- Base Model: LLaMA3\n- Parameters: 8 billion\n- Fine-tuning Dataset: cognitivecomputations/dolphin FLAN1M-Alpaca-Uncensored", "### Quantization and Configuration\n\nThis model is available in multiple configurations to best suit different deployment needs:\n\n- f16: Fastest conversion, retains 100% accuracy but is slow and memory-intensive.\n- q4_k_m: Recommended for general use, balancing between speed and efficiency.\n- q3_k_m: Good for environments where model size and speed are more critical than detailed accuracy.\n- q3_k_s: Maximizes speed and minimizes model size, suitable for very resource-constrained environments.", "## Intended Use\n\nThis model is intended for researchers and developers needing advanced natural language processing capabilities without censorship restrictions. It is particularly well-suited for generating text in scenarios where nuanced, unrestricted content generation is crucial.", "## How to Use\n\nFor Ollama, check their docs for running a GGUF model on Ollama\n\nHere is how to load and use the model in your projects using Hugging Face Transformers:", "### Training Loss Over 60 Epochs\n!image/png\n\n\n- \n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>", "## Usage\n\nCopy this markdown content into your model's page on the Hugging Face Model Hub to provide users with a clear, informative description of what your model can do and how it can be used. Adjust the 'model_name' variable in the Python code snippet to reflect the actual path to your model on Hugging Face for ease of use by others." ]
reinforcement-learning
null
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
import gymnasium as gym  # or `import gym`, depending on which Gym variant you use

# `load_from_hub` is assumed to be the pickle-loading helper defined in the Hugging Face Deep RL course notebook
model = load_from_hub(repo_id="AishwaryaDixit/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.52 +/- 2.74", "name": "mean_reward", "verified": false}]}]}]}
AishwaryaDixit/Taxi-v3
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
null
2024-04-21T20:31:47+00:00
[]
[]
TAGS #Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
# Q-Learning Agent playing Taxi-v3
 This is a trained model of a Q-Learning agent playing Taxi-v3.

 ## Usage
[ "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]
[ "TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n", "# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage" ]