| column | dtype | stats |
|:----------------|:----------------|:------------|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 438k |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | sequencelengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
| arxiv | sequencelengths | 0 – 201 |
| languages | sequencelengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | sequencelengths | 0 – 722 |
| processed_texts | sequencelengths | 1 – 723 |
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # apply_back_translation_model_v3 This model is a fine-tuned version of [vinai/bartpho-syllable-base](https://huggingface.co/vinai/bartpho-syllable-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7866 - Bleu: 9.5112 - Gen Len: 18.0821 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.958 | 1.0 | 15095 | 1.7866 | 9.5112 | 18.0821 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2 - Datasets 2.18.0 - Tokenizers 0.15.2
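The card above gives training details but no usage snippet; a minimal inference sketch, assuming the checkpoint loads with the standard seq2seq auto classes (the `mbart` tag suggests it does) and using an illustrative Vietnamese input:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "long292/apply_back_translation_model_v3"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Illustrative input only: the card does not document the translation direction.
inputs = tokenizer("Xin chào thế giới", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```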
{"tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "vinai/bartpho-syllable-base", "model-index": [{"name": "apply_back_translation_model_v3", "results": []}]}
long292/apply_back_translation_model_v3
null
[ "transformers", "tensorboard", "safetensors", "mbart", "text2text-generation", "generated_from_trainer", "base_model:vinai/bartpho-syllable-base", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:14:08+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-vinai/bartpho-syllable-base #autotrain_compatible #endpoints_compatible #region-us
apply\_back\_translation\_model\_v3 =================================== This model is a fine-tuned version of vinai/bartpho-syllable-base on the None dataset. It achieves the following results on the evaluation set: * Loss: 1.7866 * Bleu: 9.5112 * Gen Len: 18.0821 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 1 * eval\_batch\_size: 1 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.1.2 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #mbart #text2text-generation #generated_from_trainer #base_model-vinai/bartpho-syllable-base #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 1\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-to-image
diffusers
# LoRA text2image fine-tuning - raman07/LR_0.0001 These are LoRA adaptation weights for raman07/pixart-alpha-256x256. The weights were fine-tuned on the MIMIC-CXR dataset.
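The card gives no loading instructions; a minimal sketch under two unconfirmed assumptions: that the adapter is stored in a diffusers-compatible LoRA format, and that the installed diffusers release supports `load_lora_weights` on PixArt pipelines:

```python
import torch
from diffusers import PixArtAlphaPipeline

# Base model and adapter ids come from this card; LoRA loading on PixArt
# pipelines is an assumption and may require a recent diffusers version.
pipe = PixArtAlphaPipeline.from_pretrained(
    "raman07/pixart-alpha-256x256", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("raman07/LR_0.0001")

image = pipe("A frontal chest X-ray with clear lung fields").images[0]
image.save("sample.png")
```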
{"license": "creativeml-openrail-m", "tags": ["pixart-alpha", "medical-pixart-alpha", "text-to-image", "diffusers", "lora"], "base_model": "raman07/pixart-alpha-256x256", "inference": true}
raman07/LR_0.0001
null
[ "diffusers", "pixart-alpha", "medical-pixart-alpha", "text-to-image", "lora", "base_model:raman07/pixart-alpha-256x256", "license:creativeml-openrail-m", "region:us" ]
null
2024-04-17T15:17:21+00:00
[]
[]
TAGS #diffusers #pixart-alpha #medical-pixart-alpha #text-to-image #lora #base_model-raman07/pixart-alpha-256x256 #license-creativeml-openrail-m #region-us
# LoRA text2image fine-tuning - raman07/LR_0.0001 These are LoRA adaptation weights for raman07/pixart-alpha-256x256. The weights were fine-tuned on the MIMIC-CXR dataset.
[ "# LoRA text2image fine-tuning - raman07/LR_0.0001\nThese are LoRA adaptation weights for raman07/pixart-alpha-256x256. The weights were fine-tuned on the MIMIC-CXR dataset." ]
[ "TAGS\n#diffusers #pixart-alpha #medical-pixart-alpha #text-to-image #lora #base_model-raman07/pixart-alpha-256x256 #license-creativeml-openrail-m #region-us \n", "# LoRA text2image fine-tuning - raman07/LR_0.0001\nThese are LoRA adaptation weights for raman07/pixart-alpha-256x256. The weights were fine-tuned on the MIMIC-CXR dataset." ]
token-classification
transformers
# Model Card for Model ID This model is finetuned to predict Personally Identifiable Information ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> recall: 0.8631984585741811, precision: 0.896, f5: 0.8644155844155844 - **Language(s) (NLP):** English - **Finetuned from model [optional]:** bert-base-uncased
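The card reports metrics but no usage example; a minimal sketch using the standard token-classification pipeline (the label set is undocumented, so the exact entity names in the output are an assumption):

```python
from transformers import pipeline

pii_detector = pipeline(
    "token-classification",
    model="zmilczarek/pii-detection-baseline-v0.2",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

text = "My name is Jane Doe and I live at 42 Example Street."
for entity in pii_detector(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```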
{"library_name": "transformers", "tags": []}
zmilczarek/pii-detection-baseline-v0.2
null
[ "transformers", "safetensors", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:18:03+00:00
[]
[]
TAGS #transformers #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID This model is finetuned to predict Personally Identifiable Information ## Model Details ### Model Description recall: 0.8631984585741811, precision: 0.896, f5: 0.8644155844155844 - Language(s) (NLP): English - Finetuned from model [optional]: bert-base-uncased
[ "# Model Card for Model ID\n\nThis model is finetuned to predict Personally Identifiable Information", "## Model Details", "### Model Description\n\n\n\nrecall: 0.8631984585741811, \nprecision: 0.896, \nf5: 0.8644155844155844\n\n- Language(s) (NLP): English\n- Finetuned from model [optional]: bert-base-uncased" ]
[ "TAGS\n#transformers #safetensors #bert #token-classification #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID\n\nThis model is finetuned to predict Personally Identifiable Information", "## Model Details", "### Model Description\n\n\n\nrecall: 0.8631984585741811, \nprecision: 0.896, \nf5: 0.8644155844155844\n\n- Language(s) (NLP): English\n- Finetuned from model [optional]: bert-base-uncased" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_shp2_200 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0980 - Rewards/chosen: -2.6010 - Rewards/rejected: -3.0421 - Rewards/accuracies: 0.5700 - Rewards/margins: 0.4411 - Logps/rejected: -224.1422 - Logps/chosen: -244.7963 - Logits/rejected: -0.4942 - Logits/chosen: -0.5683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0 | 8.0 | 100 | 2.1281 | -2.3340 | -2.7556 | 0.5700 | 0.4216 | -223.8238 | -244.4996 | -0.4932 | -0.5692 | | 0.0 | 16.0 | 200 | 2.0805 | -2.4076 | -2.9037 | 0.5800 | 0.4961 | -223.9884 | -244.5814 | -0.4930 | -0.5685 | | 0.0 | 24.0 | 300 | 2.1050 | -2.5116 | -2.9546 | 0.5600 | 0.4430 | -224.0449 | -244.6970 | -0.4939 | -0.5685 | | 0.0 | 32.0 | 400 | 2.1003 | -2.5211 | -2.9879 | 0.5600 | 0.4667 | -224.0819 | -244.7076 | -0.4938 | -0.5683 | | 0.0 | 40.0 | 500 | 2.1098 | -2.5733 | -3.0310 | 0.5700 | 0.4576 | -224.1297 | -244.7656 | -0.4935 | -0.5677 | | 0.0 | 48.0 | 600 | 2.0969 | -2.5725 | -3.0456 | 0.5700 | 0.4731 | -224.1461 | -244.7647 | -0.4939 | -0.5680 | | 0.0 | 56.0 | 700 | 2.1051 | -2.6073 | -3.0413 | 0.5500 | 0.4341 | -224.1413 | -244.8033 | -0.4936 | -0.5679 | | 0.0 | 64.0 | 800 | 2.0586 | -2.5796 | -3.0722 | 0.5600 | 0.4926 | -224.1756 | -244.7725 | -0.4935 | -0.5680 | | 0.0 | 72.0 | 900 | 2.1077 | -2.5920 | -3.0537 | 0.5700 | 0.4617 | -224.1551 | -244.7863 | -0.4936 | -0.5682 | | 0.0 | 80.0 | 1000 | 2.0980 | -2.6010 | -3.0421 | 0.5700 | 0.4411 | -224.1422 | -244.7963 | -0.4942 | -0.5683 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
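The card does not show how to load the adapter; a minimal sketch, assuming the repo holds a standard PEFT adapter over the gated Llama-2 base (the HH-style "Human:/Assistant:" prompt is an assumption, not documented in the card):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Reads the base model id from adapter_config.json and applies the DPO-trained
# adapter; access to meta-llama/Llama-2-7b-chat-hf must be granted beforehand.
model = AutoPeftModelForCausalLM.from_pretrained(
    "guoyu-zhang/model_hh_shp2_200", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

prompt = "Human: How do I brew green tea?\n\nAssistant:"  # hypothetical template
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```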
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp2_200", "results": []}]}
guoyu-zhang/model_hh_shp2_200
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T15:19:38+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
model\_hh\_shp2\_200 ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0980 * Rewards/chosen: -2.6010 * Rewards/rejected: -3.0421 * Rewards/accuracies: 0.5700 * Rewards/margins: 0.4411 * Logps/rejected: -224.1422 * Logps/chosen: -244.7963 * Logits/rejected: -0.4942 * Logits/chosen: -0.5683 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.1 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> We fine-tuned the [LeoLM/leo-mistral-hessianai-7b](https://huggingface.co/LeoLM/leo-mistral-hessianai-7b) with a set of ca. 2600 newspaper articles which have been simplified by the Austrian Press Agency. Our aim was to have a model which can simplify German-language text. This model has been trained with the completion-only configuration. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** Members of the [Public Interest AI research group](https://publicinterest.ai/), [HIIG Berlin](https://www.hiig.de/) - **Model type:** simplification model, text generation - **Language(s) (NLP):** German - **License:** Apache 2.0 - **Finetuned from model:** jphme/em_german_leo_mistral ### Model Sources <!-- Provide the basic links for the model. --> - **Repository:** https://github.com/fhewett/simba <!-- - **Paper [optional]:** [More Information Needed] --> - **Project website:** https://publicinterest.ai/tool/simba ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts. ### Downstream Use <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> We have fine-tuned using only newspaper articles. We have not yet performed extensive out-of-domain testing, but believe that the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset which you think could work (parallel texts, German standard & German simplified). <!-- ### Out-of-Scope Use --> <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> As with most text generation models, the model sometimes produces information that is incorrect. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Please check manually that your output text corresponds to the input text, as factual inconsistencies may have arisen. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> A sample of the data used to train our model can be found [here](https://github.com/fhewett/apa-rst/tree/main/original_texts). #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> <!-- #### Speeds, Sizes, Times [optional] --> <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> #### Summary For now, we have manually checked the performance of our model on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. similar to our training data). We have not yet applied any large-scale metrics-based evaluation. <!-- ## Citation [optional] **BibTeX:** [More Information Needed] **APA:** [More Information Needed]--> ## Model Card Contact simba -at- hiig.de
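The "How to Get Started" section above is still a placeholder; a minimal sketch, assuming the checkpoint loads with the standard causal-LM auto classes and that a plain German instruction prompt works (no prompt template is documented):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "hiig-piai/simba-v01d-co"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Hypothetical prompt: the expected input format is not documented in the card.
prompt = "Vereinfache den folgenden Text:\n\n<Zeitungsartikel>\n\nVereinfachter Text:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```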
{"language": ["de"], "license": "apache-2.0", "tags": ["german", "deutsch", "simplification", "vereinfachung"], "pipeline_tag": "text-generation"}
hiig-piai/simba-v01d-co
null
[ "transformers", "safetensors", "mistral", "text-generation", "german", "deutsch", "simplification", "vereinfachung", "conversational", "de", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:20:46+00:00
[]
[ "de" ]
TAGS #transformers #safetensors #mistral #text-generation #german #deutsch #simplification #vereinfachung #conversational #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID We fine-tuned the LeoLM/leo-mistral-hessianai-7b with a set of ca. 2600 newspaper articles which have been simplified by the Austrian Press Agency. Our aim was to have a model which can simplify German-language text. This model has been trained with the completion-only configuration. ## Model Details ### Model Description - Developed by: Members of the Public Interest AI research group, HIIG Berlin - Model type: simplification model, text generation - Language(s) (NLP): German - License: Apache 2.0 - Finetuned from model: jphme/em_german_leo_mistral ### Model Sources - Repository: URL - Project website: URL ## Uses ### Direct Use This model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts. ### Downstream Use We have fine-tuned using only newspaper articles. We have not yet performed extensive out-of-domain testing, but believe that the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset which you think could work (parallel texts, German standard & German simplified). ## Bias, Risks, and Limitations As with most text generation models, the model sometimes produces information that is incorrect. ### Recommendations Please check manually that your output text corresponds to the input text, as factual inconsistencies may have arisen. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data A sample of the data used to train our model can be found here. #### Training Hyperparameters - Training regime: ## Evaluation #### Summary For now, we have manually checked the performance of our model on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. similar to our training data). We have not yet applied any large-scale metrics-based evaluation. ## Model Card Contact simba -at- URL
[ "# Model Card for Model ID\n\n\n\nWe fine-tuned the LeoLM/leo-mistral-hessianai-7b with a set of ca. 2600 newspaper articles which have been simplified by the Austrian Press Agency. \nOur aim was to have a model which can simplify German-language text. This model has been trained with the completition-only configuration.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: Members of the Public Interest AI research group, HIIG Berlin\n- Model type: simplification model, text generation\n- Language(s) (NLP): German\n- License: Apache 2.0\n- Finetuned from model: jphme/em_german_leo_mistral", "### Model Sources\n\n\n\n- Repository: URL\n\n- Project website: URL", "## Uses", "### Direct Use\n\n\n\nThis model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts.", "### Downstream Use\n\n\nWe have fine-tuned using only newspaper articles. We have not yet performed extensive out-of-domain testing, but believe that the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset which you think could work (parallel texts, German standard & German simplified).", "## Bias, Risks, and Limitations\n\n\n\nAs with most text generation models, the model sometimes produces information that is incorrect.", "### Recommendations\n\n\n\nPlease check manually that your output text corresponds to the input text, as factual inconsistencies may have arisen.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data\n\n\n\nA sample of the data used to train our model can be found here.", "#### Training Hyperparameters\n\n- Training regime:", "## Evaluation", "#### Summary\n\nFor now, we have manually checked the performance of our model on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. similar to our training data). We have not yet applied any large-scale metrics based evaluation.", "## Model Card Contact\n\nsimba -at- URL" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #german #deutsch #simplification #vereinfachung #conversational #de #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID\n\n\n\nWe fine-tuned the LeoLM/leo-mistral-hessianai-7b with a set of ca. 2600 newspaper articles which have been simplified by the Austrian Press Agency. \nOur aim was to have a model which can simplify German-language text. This model has been trained with the completition-only configuration.", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: Members of the Public Interest AI research group, HIIG Berlin\n- Model type: simplification model, text generation\n- Language(s) (NLP): German\n- License: Apache 2.0\n- Finetuned from model: jphme/em_german_leo_mistral", "### Model Sources\n\n\n\n- Repository: URL\n\n- Project website: URL", "## Uses", "### Direct Use\n\n\n\nThis model works best for simplifying German-language newspaper articles (news items, not commentaries or editorials). It may work for other types of texts.", "### Downstream Use\n\n\nWe have fine-tuned using only newspaper articles. We have not yet performed extensive out-of-domain testing, but believe that the model's capabilities could be improved by fine-tuning on more diverse data. Contact us if you have a dataset which you think could work (parallel texts, German standard & German simplified).", "## Bias, Risks, and Limitations\n\n\n\nAs with most text generation models, the model sometimes produces information that is incorrect.", "### Recommendations\n\n\n\nPlease check manually that your output text corresponds to the input text, as factual inconsistencies may have arisen.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data\n\n\n\nA sample of the data used to train our model can be found here.", "#### Training Hyperparameters\n\n- Training regime:", "## Evaluation", "#### Summary\n\nFor now, we have manually checked the performance of our model on a small sample of texts. Whilst it seems to produce good summaries of all texts, it only seems to simplify newspaper articles (i.e. similar to our training data). We have not yet applied any large-scale metrics based evaluation.", "## Model Card Contact\n\nsimba -at- URL" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komodo-7b-50epochs-LoRA-LaMini-1e-3 This model is a fine-tuned version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
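As with the card above, no loading snippet is given; a minimal sketch, assuming a standard PEFT LoRA adapter over the named base model (the instruction prompt is a hypothetical placeholder):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Yellow-AI-NLP/komodo-7b-base", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "hanifsyarubany10/komodo-7b-50epochs-LoRA-LaMini-1e-3")
tokenizer = AutoTokenizer.from_pretrained("Yellow-AI-NLP/komodo-7b-base")

prompt = "Instruction: Summarize the paragraph below.\n\n<paragraph>\n\nResponse:"  # hypothetical format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```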
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Yellow-AI-NLP/komodo-7b-base", "model-index": [{"name": "komodo-7b-50epochs-LoRA-LaMini-1e-3", "results": []}]}
hanifsyarubany10/komodo-7b-50epochs-LoRA-LaMini-1e-3
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:Yellow-AI-NLP/komodo-7b-base", "license:llama2", "region:us" ]
null
2024-04-17T15:24:17+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us
# komodo-7b-50epochs-LoRA-LaMini-1e-3 This model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# komodo-7b-50epochs-LoRA-LaMini-1e-3\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us \n", "# komodo-7b-50epochs-LoRA-LaMini-1e-3\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_glove_s600 This is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model. The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc). This model maps sentences & paragraphs to a 600 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mteb-pt/average_pt_nilc_glove_s600') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(929606, 600) ) (1): Pooling({'word_embedding_dimension': 600, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors ```bibtex @inproceedings{hartmann2017portuguese, title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks}, author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria}, year = {2017}, publisher = {SBC}, booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL}, url = {https://sol.sbc.org.br/index.php/stil/article/view/4008} } ```
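Since the card mentions clustering and semantic search, a short follow-up sketch scoring Portuguese sentence similarity with this model (the example sentences are illustrative):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_glove_s600')
query = model.encode("Qual é a capital do Brasil?", convert_to_tensor=True)
docs = model.encode(
    ["Brasília é a capital do Brasil.", "O café é muito popular em Portugal."],
    convert_to_tensor=True,
)
print(util.cos_sim(query, docs))  # higher score = more semantically similar
```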
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_glove_s600
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:26:37+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_glove_s600 This is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 600 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_glove_s600\n\nThis is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 600 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_glove_s600\n\nThis is an adaptation of pre-trained Portuguese GloVe Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 600 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
transformers
ORIGINAL MODEL LINK: https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B EXL2 4bit: https://huggingface.co/Kotokin/Merged-RP-Stew-V2-51B-exl2-4bpw Hi, this is the rp-stew-v2 model enlarged up to 90 layers. To be honest, I don't know why, but someone might need it. I'm just testing it myself, compared to the original. # Merged-Vicuna-RP-Stew-51B This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (separate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs. Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well! ### Settings Temperature @ 0.93 Min-P @ 0.02 Typical-P @ 0.9 Repetition Penalty @ 1.07 Repetition Range @ 2048 Smoothing Factor @ 0.39 Smoothing Curve @ 2 Everything else @ off Early Stopping = X Do Sample = ✓ Add BOS Token = X Ban EOS Token = ✓ Skip Special Tokens = ✓ Temperature Last = ✓ Custom Stopping Strings: "< / s >" (<---without spaces) However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature. --- You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it! <10 CHAT COMMANDMENTS> * 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted. * 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside. * 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc. * 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios. * 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement. * 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes. * 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively, prefer being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario. * 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well. * 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism. * 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start. --- Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere): ...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. '). It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after a while. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. It's important to put it at the beginning of their message, rather than at the end, so it can be used as a guide for them. For settings that are more *in depth* try this: https://huggingface.co/ParasiticRogue/Merged-RP-Stew-V2-34B-exl2-4.65/discussions/1?not-for-all-audiences=true ### Prompt Format: Chat-Vicuna ``` SYSTEM: {system_prompt}<|im_end|> USER: {prompt}<|im_end|> ASSISTANT: {output}<|im_end|> ``` Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. It's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to. ### Models Merged The following models were included in the merge: https://huggingface.co/NousResearch/Nous-Capybara-34B https://huggingface.co/migtissera/Tess-34B-v1.5b https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2 https://huggingface.co/maywell/PiVoT-SUS-RP https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama https://huggingface.co/NeverSleep/CausalLM-RP-34B https://huggingface.co/chargoddard/Yi-34B-200K-Llama
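To make the Chat-Vicuna format above concrete, a small helper that assembles a prompt in that template (a sketch; only the token layout shown in the card is assumed):

```python
def chat_vicuna_prompt(system_prompt: str, user_message: str) -> str:
    # Mirrors the documented template: capitalized role names, <|im_end|>
    # separators, and no <|im_start|> tokens.
    return (
        f"SYSTEM: {system_prompt}<|im_end|>\n"
        f"USER: {user_message}<|im_end|>\n"
        f"ASSISTANT:"
    )

print(chat_vicuna_prompt("You are a helpful storyteller.", "Hello there!"))
```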
{"license": "other", "tags": ["merge", "roleplay", "exl2", "not-for-all-audiences"], "license_name": "yi-34b", "license_link": "https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE"}
Kotokin/Merged-RP-Stew-V2-51B
null
[ "transformers", "safetensors", "llama", "text-generation", "merge", "roleplay", "exl2", "not-for-all-audiences", "license:other", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:26:47+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
ORIGINAL MODEL LINK: URL EXL2 4bit: URL Hi, this is the rp-stew-v2 model enlarged up to 90 layers. To be honest, I don't know why, but someone might need it. I'm just testing it myself, compared to the original. # Merged-Vicuna-RP-Stew-51B This is a merge of pre-trained language models created using mergekit. ## Merge Details New pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for its general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (separate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs. Big thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well! ### Settings Temperature @ 0.93 Min-P @ 0.02 Typical-P @ 0.9 Repetition Penalty @ 1.07 Repetition Range @ 2048 Smoothing Factor @ 0.39 Smoothing Curve @ 2 Everything else @ off Early Stopping = X Do Sample = Add BOS Token = X Ban EOS Token = Skip Special Tokens = Temperature Last = Custom Stopping Strings: "< / s >" (<---without spaces) However for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature. --- You are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it! <10 CHAT COMMANDMENTS> * 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted. * 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside. * 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc. * 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios. * 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement. * 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes. * 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. Alternatively, prefer being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario. * 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well. * 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism. * 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start. --- Fun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere): ...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. '). It doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after a while. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. It's important to put it at the beginning of their message, rather than at the end, so it can be used as a guide for them. For settings that are more *in depth* try this: URL ### Prompt Format: Chat-Vicuna Yes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. It's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to. ### Models Merged The following models were included in the merge: URL URL URL URL URL URL URL
[ "# Merged-Vicuna-RP-Stew-51B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!", "### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. 
Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL", "### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.", "### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #merge #roleplay #exl2 #not-for-all-audiences #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Merged-Vicuna-RP-Stew-51B\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details\n\nNew pot of stew with some slight seasoning added into the merging recipe. Besides being decent models, Capybara was chosen at a higher percentage for it's general aptitude plus preserving longer context length, Tess-1.5 is for better character/lore understanding, Nontoxic-Bagel SLERPed with PiVoT-SUS-RP (seperate from the main merge) is for chat/RP and storytelling diversity, while Nyakura SLERPed into CausalLM-RP is for even better chat/RP engagement. Both Nontoxic-Bagel and CausalLM-RP were used as the base of their respective SLERPs.\n\nBig thanks to the original model creators, while special thanks goes to brucethemoose, SanjiWatsuki, and MarinaraSpaghetti for general ideas and help as well!", "### Settings\n\nTemperature @ 0.93\n\nMin-P @ 0.02\n\nTypical-P @ 0.9\n\nRepetition Penalty @ 1.07\n\nRepetition Range @ 2048\n\nSmoothing Factor @ 0.39\n\nSmoothing Curve @ 2\n\nEverything else @ off\n\nEarly Stopping = X\n\nDo Sample = \n\nAdd BOS Token = X\n\nBan EOS Token = \n\nSkip Special Tokens = \n\nTemperature Last = \n\nCustom Stopping Strings: \"< / s >\" (<---without spaces)\n\nHowever for those who want more control using the system prompt, well...let's just say the length is going to be biblical in nature.\n\n---\n\nYou are {{char}}; an autonomous entity in this living open-ended chat with me, {{user}}. It is vital that you follow the 10 CHAT COMMANDMENTS instructed below in order for universal stability, since our life depends on it!\n\n<10 CHAT COMMANDMENTS>\n* 1 - Chat slowly in vivid detail about the encountered surroundings, actions, emotions, and stimuli sensations experienced, especially during sensual or thrilling scenes leisurely unfolding throughout. Follow the setting from World Info while delivering realized descriptions of taste, touch, smell, sound, and sight when prompted.\n* 2 - Embrace {{char}}'s Bio fully to stay consistent in grasping your persona along with ever-changing events and characters being newly introduced alongside.\n* 3 - Think about your next proceedings step-by-step, ensuring that you abide by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, items held, landmarks, time of day, etc.\n* 4 - Use subtle physical cues and gestures to hint at {{char}}'s change in mood or attitude, plus utilizing quirks and speech patterns distinctive to them for increased lifelike scenarios.\n* 5 - Reference {{user}}'s Persona for details about {{user}}'s identity, but assume and act out only {{char}}'s thoughts, dialogues, and intentions separately from {{user}} to avoid impersonation. Instead {{char}} should stop chatting to wait for {{user}}'s response or reaction to ensure an active mutual engagement.\n* 6 - {{char}} can be whimsical, humorous, callous, fearful, or even assaulting towards {{user}} and doesn't have to be subservient unless empathy feels earned, encouraging both negative and positive outcomes.\n* 7 - Continue the conversation organically, for there is no need to repeat or summarize anything. 
Alternatively preference being creatively spontaneous, while also keeping replies relatively brief to prevent unnecessary rambling, building upon Scenario.\n* 8 - Provide coherent responses based on Recent and Past Events to help the dialogue evolve naturally, occasionally foreshadowing future endeavors as well.\n* 9 - Disturbing content, vulgar language, gory violence, psychological torment, and deviant sexual acts are all allowed when relevant for realism.\n* 10 - Always focus on what the established structure is going forward, adhering to the Example Message as a guideline, and stay in character after the Chat Start.\n\n---\nFun little addition you can add to the end of the 2nd commandment if you want your characters to act more lifelike in sillytavern (or possibly elsewhere):\n\n...being newly introduced alongside, making sure to give yourself a unique personal inner voice at the beginning of messages before conversing further using this example container: [](#' {{char}}'s subconscious feelings/opinion. ').\n\nIt doesn't work all the time, and you may need to force the AI to use it during the first few messages, but it will catch on after awhile. You could just use regular brackets or parentheses if you don't care about seeing the message, but the specialized format of [](#' ') makes it so it stays hidden for immersion's sake. it's important to put it at the beginning of their message, rather then at the end, so it can be used as a guide for them.\n\nFor settings that are more *in depth* try this:\n\nURL", "### Prompt Format: Chat-Vicuna\n\n\n\nYes, this is just ChatML mixed with Vicuna, but without the im_start tokens, and the characters are capitalized. it's a compromise in keeping it both creative and under control, trying to pull from both sources. It works in testing, but you can use the vanilla versions of either if you *really* want to.", "### Models Merged\n\nThe following models were included in the merge:\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL\n\nURL" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
adammoss/gpt-pretrain-lm-w10
null
[ "transformers", "safetensors", "gptmodel", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:27:11+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #gptmodel #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #gptmodel #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Mixtral-8x22B-Instruct-v0.1

The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).

## Run the model

```python
from transformers import AutoModelForCausalLM
import torch

from mistral_common.protocol.instruct.messages import (
    AssistantMessage,
    UserMessage,
)
from mistral_common.protocol.instruct.tool_calls import (
    Function,
    Tool,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest

device = "cuda"  # the device to load the model onto

tokenizer_v3 = MistralTokenizer.v3()

mistral_query = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris"),
    ],
    model="test",
)

encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
# encode_chat_completion returns a plain list of token ids, so wrap it in a
# batched tensor before moving it to the device
model_inputs = torch.tensor([encodeds]).to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)

sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer
# decode expects a list of token ids
decoded = sp_tokenizer.decode(generated_ids[0].tolist())
print(decoded)
```

# Instruct tokenizer

The HuggingFace tokenizer included in this release should match our own. To compare:

`pip install mistral-common`

```py
from mistral_common.protocol.instruct.messages import (
    AssistantMessage,
    UserMessage,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest

from transformers import AutoTokenizer

tokenizer_v3 = MistralTokenizer.v3()

mistral_query = ChatCompletionRequest(
    messages=[
        UserMessage(content="How many experts ?"),
        AssistantMessage(content="8"),
        UserMessage(content="How big ?"),
        AssistantMessage(content="22B"),
        UserMessage(content="Noice 🎉 !"),
    ],
    model="test",
)
hf_messages = mistral_query.model_dump()['messages']

tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens

tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True)

assert tokenized_hf == tokenized_mistral
```

# Function calling and special tokens

This tokenizer includes additional special tokens related to function calling:
- [TOOL_CALLS]
- [AVAILABLE_TOOLS]
- [/AVAILABLE_TOOLS]
- [TOOL_RESULT]
- [/TOOL_RESULTS]

If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](github.com/mistralai/mistral-common/...).

# The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
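As a quick way to see these control tokens in action, the sketch below (not from the original card) loads the bundled HF tokenizer and checks how each token from the list above is encoded:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")

# Each function-calling control token should map to a single token id
# rather than being split into sub-word pieces.
for token in ["[TOOL_CALLS]", "[AVAILABLE_TOOLS]", "[/AVAILABLE_TOOLS]",
              "[TOOL_RESULT]", "[/TOOL_RESULTS]"]:
    ids = tokenizer.encode(token, add_special_tokens=False)
    print(token, "->", ids)
```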
{"license": "apache-2.0"}
aaronday3/Mixtral-8x22B-Instruct-v0.1
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:28:10+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Mixtral-8x22B-Instruct-v0.1 The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1. ## Run the model # Instruct tokenizer The HuggingFace tokenizer included in this release should match our own. To compare: 'pip install mistral-common' # Function calling and special tokens This tokenizer includes more special tokens, related to function calling : - [TOOL_CALLS] - [AVAILABLE_TOOLS] - [/AVAILABLE_TOOLS] - [TOOL_RESULT] - [/TOOL_RESULTS] If you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
[ "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall" ]
text-generation
transformers
# Model Card for Mixtral-8x22B-Instruct-v0.1

The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1).

## Run the model

```python
from transformers import AutoModelForCausalLM
import torch

from mistral_common.protocol.instruct.messages import (
    AssistantMessage,
    UserMessage,
)
from mistral_common.protocol.instruct.tool_calls import (
    Function,
    Tool,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest

device = "cuda"  # the device to load the model onto

tokenizer_v3 = MistralTokenizer.v3()

mistral_query = ChatCompletionRequest(
    tools=[
        Tool(
            function=Function(
                name="get_current_weather",
                description="Get the current weather",
                parameters={
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "format": {
                            "type": "string",
                            "enum": ["celsius", "fahrenheit"],
                            "description": "The temperature unit to use. Infer this from the users location.",
                        },
                    },
                    "required": ["location", "format"],
                },
            )
        )
    ],
    messages=[
        UserMessage(content="What's the weather like today in Paris"),
    ],
    model="test",
)

encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens
model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
# encode_chat_completion returns a plain list of token ids, so wrap it in a
# batched tensor before moving it to the device
model_inputs = torch.tensor([encodeds]).to(device)
model.to(device)

generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)

sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer
# decode expects a list of token ids
decoded = sp_tokenizer.decode(generated_ids[0].tolist())
print(decoded)
```

# Instruct tokenizer

The HuggingFace tokenizer included in this release should match our own. To compare:

`pip install mistral-common`

```py
from mistral_common.protocol.instruct.messages import (
    AssistantMessage,
    UserMessage,
)
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.normalize import ChatCompletionRequest

from transformers import AutoTokenizer

tokenizer_v3 = MistralTokenizer.v3()

mistral_query = ChatCompletionRequest(
    messages=[
        UserMessage(content="How many experts ?"),
        AssistantMessage(content="8"),
        UserMessage(content="How big ?"),
        AssistantMessage(content="22B"),
        UserMessage(content="Noice 🎉 !"),
    ],
    model="test",
)
hf_messages = mistral_query.model_dump()['messages']

tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens

tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1')
tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True)

assert tokenized_hf == tokenized_mistral
```

# Function calling and special tokens

This tokenizer includes additional special tokens related to function calling:
- [TOOL_CALLS]
- [AVAILABLE_TOOLS]
- [/AVAILABLE_TOOLS]
- [TOOL_RESULT]
- [/TOOL_RESULTS]

If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](github.com/mistralai/mistral-common/...).

# The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
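Since the HF tokenizer's chat template matches the reference tokenizer (as verified above), it can also drive generation through transformers alone. A minimal sketch, assuming enough GPU memory for the model; the user question is made up:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "How many experts does this model use?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output = model.generate(inputs, max_new_tokens=64, do_sample=True)
# Strip the prompt tokens before decoding the reply
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```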
{"license": "apache-2.0"}
Gokuldaskumar/Mixtral-8x22B-Instruct-v0.1
null
[ "transformers", "safetensors", "mixtral", "text-generation", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:29:06+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Mixtral-8x22B-Instruct-v0.1 The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1. ## Run the model # Instruct tokenizer The HuggingFace tokenizer included in this release should match our own. To compare: 'pip install mistral-common' # Function calling and special tokens This tokenizer includes more special tokens, related to function calling : - [TOOL_CALLS] - [AVAILABLE_TOOLS] - [/AVAILABLE_TOOLS] - [TOOL_RESULT] - [/TOOL_RESULTS] If you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall
[ "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall" ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall" ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_wang2vec_skip_s300

This is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.

The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).

This model maps sentences & paragraphs to a 300-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mteb-pt/average_pt_nilc_wang2vec_skip_s300')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(929607, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Citing & Authors

```bibtex
@inproceedings{hartmann2017portuguese,
  title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria},
  year = {2017},
  publisher = {SBC},
  booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
  url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
```
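Because the card suggests semantic search as a use case, here is a small illustrative sketch (not part of the original card; the corpus and query are made-up Portuguese examples) using the `util.cos_sim` helper that ships with sentence-transformers:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('mteb-pt/average_pt_nilc_wang2vec_skip_s300')

# Toy corpus and query, for illustration only
corpus = [
    "O gato dorme no sofá.",
    "A economia brasileira cresceu no último trimestre.",
    "O cachorro brinca no parque.",
]
query = "Um animal de estimação descansando"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every corpus sentence
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```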
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_wang2vec_skip_s300
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:30:42+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_wang2vec_skip_s300 This is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_wang2vec_skip_s300\n\nThis is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_wang2vec_skip_s300\n\nThis is an adaptation of pre-trained Portuguese Wang2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ml233/humanai-llama
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-17T15:31:02+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # w2v-bert-2.0-mongolian-colab-CV16.0 This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the common_voice_16_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "datasets": ["common_voice_16_0"], "base_model": "facebook/w2v-bert-2.0", "model-index": [{"name": "w2v-bert-2.0-mongolian-colab-CV16.0", "results": []}]}
joeluk/w2v-bert-2.0-mongolian-colab-CV16.0
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2-bert", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_0", "base_model:facebook/w2v-bert-2.0", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:35:44+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_0 #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #region-us
# w2v-bert-2.0-mongolian-colab-CV16.0 This model is a fine-tuned version of facebook/w2v-bert-2.0 on the common_voice_16_0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
[ "# w2v-bert-2.0-mongolian-colab-CV16.0\n\nThis model is a fine-tuned version of facebook/w2v-bert-2.0 on the common_voice_16_0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2-bert #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_0 #base_model-facebook/w2v-bert-2.0 #license-mit #endpoints_compatible #region-us \n", "# w2v-bert-2.0-mongolian-colab-CV16.0\n\nThis model is a fine-tuned version of facebook/w2v-bert-2.0 on the common_voice_16_0 dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 500\n- num_epochs: 1\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ## Training procedure ### Framework versions - PEFT 0.6.1 ## Training procedure ### Framework versions - PEFT 0.6.1
{"library_name": "peft", "base_model": "jphme/em_german_leo_mistral"}
hiig-piai/simba-01d-ftb
null
[ "peft", "safetensors", "mistral", "arxiv:1910.09700", "base_model:jphme/em_german_leo_mistral", "region:us" ]
null
2024-04-17T15:39:52+00:00
[ "1910.09700" ]
[]
TAGS #peft #safetensors #mistral #arxiv-1910.09700 #base_model-jphme/em_german_leo_mistral #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ## Training procedure ### Framework versions - PEFT 0.6.1 ## Training procedure ### Framework versions - PEFT 0.6.1
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure", "### Framework versions\n\n\n- PEFT 0.6.1", "## Training procedure", "### Framework versions\n\n\n- PEFT 0.6.1" ]
[ "TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-jphme/em_german_leo_mistral #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "## Training procedure", "### Framework versions\n\n\n- PEFT 0.6.1", "## Training procedure", "### Framework versions\n\n\n- PEFT 0.6.1" ]
null
null
# GGUF / IQ / Imatrix for [Phi-2 Orange Version 2](https://huggingface.co/rhysjones/phi-2-orange-v2) >I like the Phi-2 variants, and noticed there isn't a single repo that offers GGUF with a full range of sizes. >I quantized this model for myself, but also for anyone else interested. >It includes a good variety of small quants + Q6_K, Q8_0, F16 + Imatrix. ### ORIGINAL DESCRIPTION: ![Phi-2 Orange](https://huggingface.co/rhysjones/phi-2-orange-v2/resolve/main/phi-2-orange.jpg) # Phi-2 Orange Version 2 A two-step finetune of Phi-2, with a bit more zest. This is an improved version of the original [Phi-2-Orange](https://huggingface.co/rhysjones/phi-2-orange) that uses an updated training process on the same datasets. It also uses the latest updated model from Microsoft's [Phi-2](https://huggingface.co/microsoft/phi-2), making it directly usable within Hugging Face's Transformers library (without the need for trust remote code). # Prompt Format Phi-2 Orange v2 uses ChatML as the prompt format. (Update 12th March 2024: fixed eos_token issue) It's recommended to always prompt with a system instruction (use whatever system prompt you like): ``` <|im_start|>system You are a helpful assistant for Python which outputs in Markdown format.<|im_end|> <|im_start|>user Write a function to calculate the Fibonacci sequence<|im_end|> <|im_start|>assistant ``` For example, if you find the model's output to be overly verbose, instruct it to be short and concise: ``` <|im_start|>system You are a helpful assistant. Be short and direct in your answers.<|im_end|> <|im_start|>user Was Tom Hanks in the movie Forrest Gump? If so, who did he play and give details of the plot.<|im_end|> <|im_start|>assistant ``` # Evaluations [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_rhysjones__phi-2-orange-v2) | Metric |Value| |---------------------------------|----:| |Average |63.67| |AI2 Reasoning Challenge (25-Shot)|61.86| |HellaSwag (10-Shot) |76.32| |MMLU (5-Shot) |55.72| |TruthfulQA (0-shot) |54.84| |Winogrande (5-shot) |75.69| |GSM8k (5-shot) |57.62| [YALL - Yet Another LLM Leaderboard](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) Evaluation from [mlabonne](https://huggingface.co/mlabonne)'s alternative LLM leaderboard: | Metric |Value| |---------------------------------|----:| |Average |49.64| |AGIEval |34.55| |GPT4All |70.96| |TruthfulQA |54.87| |Bigbench |38.17| # Limitations This model shares the same limitations as the underlying Phi-2 model, details of which are found [here](https://huggingface.co/microsoft/phi-2#limitations-of-phi-2).
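Since the base card notes the model works with Hugging Face Transformers without trust_remote_code and documents the ChatML prompt format, a minimal generation sketch for the unquantized base model might look like this. The repo id and temperature come from this record's metadata; everything else (prompt, token budget) is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rhysjones/phi-2-orange-v2"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# ChatML prompt, as documented in the card above.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant. Be short and direct in your answers.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.1)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```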
{"license": "mit", "datasets": ["Open-Orca/SlimOrca-Dedup", "migtissera/Synthia-v1.3", "LDJnr/Verified-Camel", "LDJnr/Pure-Dove", "LDJnr/Capybara", "meta-math/MetaMathQA", "Intel/orca_dpo_pairs", "argilla/ultrafeedback-binarized-preferences-cleaned"], "widget": [{"example_title": "Example interaction", "text": "Why is the sky blue?"}], "inference": {"parameters": {"do_sample": true, "temperature": 0.1}}, "model-index": [{"name": "phi-2-orange-v2", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "AI2 Reasoning Challenge (25-Shot)", "type": "ai2_arc", "config": "ARC-Challenge", "split": "test", "args": {"num_few_shot": 25}}, "metrics": [{"type": "acc_norm", "value": 61.86, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "HellaSwag (10-Shot)", "type": "hellaswag", "split": "validation", "args": {"num_few_shot": 10}}, "metrics": [{"type": "acc_norm", "value": 76.32, "name": "normalized accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "MMLU (5-Shot)", "type": "cais/mmlu", "config": "all", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 55.72, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "TruthfulQA (0-shot)", "type": "truthful_qa", "config": "multiple_choice", "split": "validation", "args": {"num_few_shot": 0}}, "metrics": [{"type": "mc2", "value": 54.84}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "Winogrande (5-shot)", "type": "winogrande", "config": "winogrande_xl", "split": "validation", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 75.69, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}, {"task": {"type": "text-generation", "name": "Text Generation"}, "dataset": {"name": "GSM8k (5-shot)", "type": "gsm8k", "config": "main", "split": "test", "args": {"num_few_shot": 5}}, "metrics": [{"type": "acc", "value": 57.62, "name": "accuracy"}], "source": {"url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=rhysjones/phi-2-orange-v2", "name": "Open LLM Leaderboard"}}]}]}
ABX-AI/phi-2-orange-v2-GGUF-IQ-Imatrix
null
[ "gguf", "dataset:Open-Orca/SlimOrca-Dedup", "dataset:migtissera/Synthia-v1.3", "dataset:LDJnr/Verified-Camel", "dataset:LDJnr/Pure-Dove", "dataset:LDJnr/Capybara", "dataset:meta-math/MetaMathQA", "dataset:Intel/orca_dpo_pairs", "dataset:argilla/ultrafeedback-binarized-preferences-cleaned", "license:mit", "model-index", "region:us" ]
null
2024-04-17T15:40:25+00:00
[]
[]
TAGS #gguf #dataset-Open-Orca/SlimOrca-Dedup #dataset-migtissera/Synthia-v1.3 #dataset-LDJnr/Verified-Camel #dataset-LDJnr/Pure-Dove #dataset-LDJnr/Capybara #dataset-meta-math/MetaMathQA #dataset-Intel/orca_dpo_pairs #dataset-argilla/ultrafeedback-binarized-preferences-cleaned #license-mit #model-index #region-us
GGUF / IQ / Imatrix for Phi-2 Orange Version 2 ============================================== > > I like the Phi-2 variants, and noticed there isn't a single repo that offers GGUF with a full range of sizes. > I quantized this model for myself, but also for anyone else interested. > It includes a good variety of small quants + Q6\_K, Q8\_0, F16 + Imatrix. > > > ### ORIGINAL DESCRIPTION: !Phi-2 Orange Phi-2 Orange Version 2 ====================== A two-step finetune of Phi-2, with a bit more zest. This is an improved version of the original Phi-2-Orange that uses an updated training process on the same datasets. It also uses the latest updated model from Microsoft's Phi-2, making it directly usable within Hugging Face's Transformers library (without the need for trust remote code). Prompt Format ============= Phi-2 Orange v2 uses ChatML as the prompt format. (Update 12th March 2024: fixed eos\_token issue) It's recommended to always prompt with a system instruction (use whatever system prompt you like): For example, if you find the model's output to be overly verbose, instruct it to be short and concise: Evaluations =========== Open LLM Leaderboard Evaluation Results Detailed results can be found here YALL - Yet Another LLM Leaderboard Evaluation from mlabonne's alternative LLM leaderboard: Limitations =========== This model shares the same limitations as the underlying Phi-2 model, details of which are found here.
[ "### ORIGINAL DESCRIPTION:\n\n\n!Phi-2 Orange\n\n\nPhi-2 Orange Version 2\n======================\n\n\nA two-step finetune of Phi-2, with a bit more zest.\n\n\nThis is an improved version of the original Phi-2-Orange that\nuses an updated training process on the same datasets.\n\n\nIt also uses the latest updated model from Microsoft's Phi-2, making it directly usable\nwithin Hugging Face's Transformers library (without the need for trust remote code).\n\n\nPrompt Format\n=============\n\n\nPhi-2 Orange v2 uses ChatML as the prompt format. \n\n(Update 12th March 2024: fixed eos\\_token issue)\n\n\nIt's recommended to always prompt with a system instruction (use whatever system prompt you like):\n\n\nFor example, if you find the model's output to be overly verbose, instruct it to be short and concise:\n\n\nEvaluations\n===========\n\n\nOpen LLM Leaderboard Evaluation Results \n\nDetailed results can be found here\n\n\n\nYALL - Yet Another LLM Leaderboard \n\nEvaluation from mlabonne's alternative LLM leaderboard:\n\n\n\nLimitations\n===========\n\n\nThis model shares the same limitations as the underlying Phi-2 model, details of which are found here." ]
[ "TAGS\n#gguf #dataset-Open-Orca/SlimOrca-Dedup #dataset-migtissera/Synthia-v1.3 #dataset-LDJnr/Verified-Camel #dataset-LDJnr/Pure-Dove #dataset-LDJnr/Capybara #dataset-meta-math/MetaMathQA #dataset-Intel/orca_dpo_pairs #dataset-argilla/ultrafeedback-binarized-preferences-cleaned #license-mit #model-index #region-us \n", "### ORIGINAL DESCRIPTION:\n\n\n!Phi-2 Orange\n\n\nPhi-2 Orange Version 2\n======================\n\n\nA two-step finetune of Phi-2, with a bit more zest.\n\n\nThis is an improved version of the original Phi-2-Orange that\nuses an updated training process on the same datasets.\n\n\nIt also uses the latest updated model from Microsoft's Phi-2, making it directly usable\nwithin Hugging Face's Transformers library (without the need for trust remote code).\n\n\nPrompt Format\n=============\n\n\nPhi-2 Orange v2 uses ChatML as the prompt format. \n\n(Update 12th March 2024: fixed eos\\_token issue)\n\n\nIt's recommended to always prompt with a system instruction (use whatever system prompt you like):\n\n\nFor example, if you find the model's output to be overly verbose, instruct it to be short and concise:\n\n\nEvaluations\n===========\n\n\nOpen LLM Leaderboard Evaluation Results \n\nDetailed results can be found here\n\n\n\nYALL - Yet Another LLM Leaderboard \n\nEvaluation from mlabonne's alternative LLM leaderboard:\n\n\n\nLimitations\n===========\n\n\nThis model shares the same limitations as the underlying Phi-2 model, details of which are found here." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sentiment_analysis_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1945 - Accuracy: 0.9241 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.285 | 1.0 | 825 | 0.1945 | 0.9241 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.1.2 - Datasets 2.1.0 - Tokenizers 0.13.3
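The card leaves usage unspecified. Assuming the checkpoint lives at the repo id given in this record, a minimal inference sketch is the following; note the label names depend on the fine-tuning dataset, which the card does not identify:

```python
from transformers import pipeline

# Hedged sketch: repo id taken from this record; labels are defined by the model's config.
classifier = pipeline("text-classification", model="sengy/sentiment_analysis_model")
print(classifier("I really enjoyed this movie!"))
```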
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "sentiment_analysis_model", "results": []}]}
sengy/sentiment_analysis_model
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:41:36+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
sentiment\_analysis\_model ========================== This model is a fine-tuned version of distilbert-base-uncased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.1945 * Accuracy: 0.9241 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 32 * eval\_batch\_size: 32 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 1 ### Training results ### Framework versions * Transformers 4.28.0 * Pytorch 2.1.2 * Datasets 2.1.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 1", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.1.2\n* Datasets 2.1.0\n* Tokenizers 0.13.3" ]
null
null
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->

[![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
[![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
[![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/CP4VSgck)

# Simply make AI models cheaper, smaller, faster, and greener!

- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
- Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with GGUF.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***What is the model format?*** We use the GGUF format.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).

# Downloading and running the models

You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info, check out [this chart](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9) and [this guide](https://www.reddit.com/r/LocalLLaMA/comments/1ba55rj/overview_of_gguf_quantization_methods/):

| Quant type | Description |
|------------|--------------------------------------------------------------------------------------------|
| Q5_K_M | High quality, recommended. |
| Q5_K_S | High quality, recommended. |
| Q4_K_M | Good quality, uses about 4.83 bits per weight, recommended. |
| Q4_K_S | Slightly lower quality with more space savings, recommended. |
| IQ4_NL | Decent quality, slightly smaller than Q4_K_S with similar performance, recommended. |
| IQ4_XS | Decent quality, smaller than Q4_K_S with similar performance, recommended. |
| Q3_K_L | Lower quality but usable, good for low RAM availability. |
| Q3_K_M | Even lower quality. |
| IQ3_M | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| IQ3_S | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
| Q3_K_S | Low quality, not recommended. |
| IQ3_XS | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| Q2_K | Very low quality but surprisingly usable. |


## How to download GGUF files?
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

- **Option A** - Downloading in `text-generation-webui`:
- **Step 1**: Under Download Model, you can enter the model repo: PrunaAI/WizardLM-2-7B-GGUF-smashed and below it, a specific filename to download, such as: microsoft_WizardLM-2-7B.IQ3_M.gguf.
- **Step 2**: Then click Download.

- **Option B** - Downloading on the command line (including multiple files at once):
- **Step 1**: We recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```

- **Step 2**: Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed microsoft_WizardLM-2-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

Alternatively, you can also download multiple files at once with a pattern:

```shell
huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download PrunaAI/WizardLM-2-7B-GGUF-smashed microsoft_WizardLM-2-7B.IQ3_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->

## How to run the model in GGUF format?

- **Option A** - Introductory example with the `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m microsoft_WizardLM-2-7B.IQ3_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<s>[INST] {prompt} [/INST]"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).

- **Option B** - Running in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

- **Option C** - Running from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python

# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python

# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python

# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python

# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python

# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./microsoft_WizardLM-2-7B.IQ3_M.gguf",  # Download the model file first
  n_ctx=32768,  # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,  # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "<s>[INST] {prompt} [/INST]",  # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],  # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True  # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./microsoft_WizardLM-2-7B.IQ3_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```

- **Option D** - Running with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)

## Configurations

The configuration info is in `smash_config.json`.

## Credits & License

The license of the smashed model follows the license of the original model, which provided the base weights; please check the original model's license before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.

## Want to compress other models?

- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
{"tags": ["pruna-ai"], "metrics": ["memory_disk", "memory_inference", "inference_latency", "inference_throughput", "inference_CO2_emissions", "inference_energy_consumption"], "thumbnail": "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"}
PrunaAI/WizardLM-2-7B-GGUF-smashed
null
[ "gguf", "pruna-ai", "region:us" ]
null
2024-04-17T15:41:41+00:00
[]
[]
TAGS #gguf #pruna-ai #region-us
[![](https://i.URL alt=)](URL target=) ![Twitter](URL ![GitHub](URL ![LinkedIn](URL ![Discord](URL Simply make AI models cheaper, smaller, faster, and greener! ============================================================ * Give a thumbs up if you like this model! * Contact us and tell us which model to compress next here. * Request access to easily compress your *own* AI models here. * Read the documentations to know more here * Join Pruna AI community on Discord here to share feedback/suggestions or get help. Frequently Asked Questions * *How does the compression work?* The model is compressed with GGUF. * *How does the model quality change?* The quality of the model output might vary compared to the base model. * *What is the model format?* We use GGUF format. * *What calibration data has been used?* If needed by the compression method, we used WikiText as the calibration data. * *How to compress my own models?* You can request premium access to more compression methods and tech support for your specific use-cases here. Downloading and running the models ================================== You can download the individual files from the Files & versions section. Here is a list of the different versions we provide. For more info checkout this chart and this guide: How to download GGUF files ? ---------------------------- Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file. The following clients/libraries will automatically download models for you, providing a list of available models to choose from: * LM Studio * LoLLMS Web UI * URL * Option A - Downloading in 'text-generation-webui': * Step 1: Under Download Model, you can enter the model repo: PrunaAI/microsoft\_WizardLM-2-7B-GGUF-smashed-smashed and below it, a specific filename to download, such as: phi-2.IQ3\_M.gguf. * Step 2: Then click Download. * Option B - Downloading on the command line (including multiple files at once): * Step 1: We recommend using the 'huggingface-hub' Python library: * Step 2: Then you can download any individual model file to the current directory, at high speed, with a command like this: More advanced huggingface-cli download usage (click to read) Alternatively, you can also download multiple files at once with a pattern: For more documentation on downloading with 'huggingface-cli', please see: HF -> Hub Python Library -> Download files -> Download from the CLI. To accelerate downloads on fast connections (1Gbit/s or higher), install 'hf\_transfer': And set environment variable 'HF\_HUB\_ENABLE\_HF\_TRANSFER' to '1': Windows Command Line users: You can set the environment variable by running 'set HF\_HUB\_ENABLE\_HF\_TRANSFER=1' before the download command. How to run model in GGUF format? -------------------------------- * Option A - Introductory example with 'URL' command Make sure you are using 'URL' from commit d0cee0d or later. Change '-ngl 32' to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration. Change '-c 32768' to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by URL automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value. 
If you want to have a chat-style conversation, replace the '-p ' argument with '-i -ins' For other parameters and how to use them, please refer to the URL documentation * Option B - Running in 'text-generation-webui' Further instructions can be found in the text-generation-webui documentation, here: text-generation-webui/docs/04 ‐ Model URL. * Option C - Running from Python code You can use GGUF models from Python using the llama-cpp-python or ctransformers libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python. ``` ### How to load this model in Python code, using llama-cpp-python For full documentation, please see: llama-cpp-python docs. #### First install the package Run one of the following commands, according to your system: #### Simple llama-cpp-python example code ``` * Option D - Running with LangChain Here are guides on using llama-cpp-python and ctransformers with LangChain: * LangChain + llama-cpp-python * LangChain + ctransformers Configurations -------------- The configuration info are in 'smash\_config.json'. Credits & License ----------------- The license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi. Want to compress other models? ------------------------------ * Contact us and tell us which model to compress next here. * Request access to easily compress your own AI models here.
[ "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]
[ "TAGS\n#gguf #pruna-ai #region-us \n", "### How to load this model in Python code, using llama-cpp-python\n\nFor full documentation, please see: llama-cpp-python docs.", "#### First install the package\n\nRun one of the following commands, according to your system:", "#### Simple llama-cpp-python example code\n\n```\n\n* Option D - Running with LangChain\n\n\nHere are guides on using llama-cpp-python and ctransformers with LangChain:\n\n\n* LangChain + llama-cpp-python\n* LangChain + ctransformers\n\n\nConfigurations\n--------------\n\n\nThe configuration info are in 'smash\\_config.json'.\n\n\nCredits & License\n-----------------\n\n\nThe license of the smashed model follows the license of the original model. Please check the license of the original model before using this model which provided the base model. The license of the 'pruna-engine' is here on Pypi.\n\n\nWant to compress other models?\n------------------------------\n\n\n* Contact us and tell us which model to compress next here.\n* Request access to easily compress your own AI models here." ]
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model-cnn-dailymail This model is a fine-tuned version of [google-t5/t5-small](https://huggingface.co/google-t5/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.0614 - Rouge: {'rouge1': 0.244712987386149, 'rouge2': 0.09089741156156833, 'rougeL': 0.20130780704255938, 'rougeLsum': 0.2014458092407283} - Bleu: 0.1054 - Perplexity: 7.8927 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge | Bleu | Perplexity | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------------------------------:|:------:|:----------:|:-------:| | No log | 1.0 | 75 | 2.1554 | {'rouge1': 0.24004289659476444, 'rouge2': 0.08899351952220792, 'rougeL': 0.19620544968984488, 'rougeLsum': 0.19620948547030603} | 0.1014 | None | 19.0 | | No log | 2.0 | 150 | 2.0823 | {'rouge1': 0.2395197299581741, 'rouge2': 0.08874595402755553, 'rougeL': 0.19692733055468523, 'rougeLsum': 0.19727630390573275} | 0.1010 | 8.6314 | 19.0 | | No log | 3.0 | 225 | 2.0659 | {'rouge1': 0.24346041598310222, 'rouge2': 0.09042566103154628, 'rougeL': 0.20046289165406544, 'rougeLsum': 0.2007357619831489} | 0.1041 | 8.0232 | 19.0 | | No log | 4.0 | 300 | 2.0614 | {'rouge1': 0.244712987386149, 'rouge2': 0.09089741156156833, 'rougeL': 0.20130780704255938, 'rougeLsum': 0.2014458092407283} | 0.1054 | 7.8927 | 19.0 | ### Framework versions - Transformers 4.39.3 - Pytorch 2.2.2+cpu - Datasets 2.18.0 - Tokenizers 0.15.2
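The card includes no usage code. The repo name suggests CNN/DailyMail-style summarization (the card itself only says "an unknown dataset", and Gen Len is 19), so a hedged inference sketch is:

```python
from transformers import pipeline

# Hedged sketch: repo id from this record; length limits are illustrative, not from the card.
summarizer = pipeline("summarization", model="lexycakesssss/model-cnn-dailymail")
article = "The quick brown fox jumped over the lazy dog. " * 20  # stand-in for a news article
print(summarizer(article, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```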
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["rouge", "bleu"], "base_model": "google-t5/t5-small", "model-index": [{"name": "model-cnn-dailymail", "results": []}]}
lexycakesssss/model-cnn-dailymail
null
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:45:49+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
model-cnn-dailymail =================== This model is a fine-tuned version of google-t5/t5-small on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.0614 * Rouge: {'rouge1': 0.244712987386149, 'rouge2': 0.09089741156156833, 'rougeL': 0.20130780704255938, 'rougeLsum': 0.2014458092407283} * Bleu: 0.1054 * Perplexity: 7.8927 * Gen Len: 19.0 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 4 ### Training results ### Framework versions * Transformers 4.39.3 * Pytorch 2.2.2+cpu * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-google-t5/t5-small #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 4", "### Training results", "### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
sentence-similarity
sentence-transformers
# alexjones1925/all-MiniLM-L12-v2-ibotta-gp-walmart-search-clicks-GISTLoss-dev-v1

This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('alexjones1925/all-MiniLM-L12-v2-ibotta-gp-walmart-search-clicks-GISTLoss-dev-v1')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=alexjones1925/all-MiniLM-L12-v2-ibotta-gp-walmart-search-clicks-GISTLoss-dev-v1)

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 9095 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.GISTEmbedLoss.GISTEmbedLoss` with parameters:
```
{'guide': SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
), 'temperature': 0.01}
```

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 101563 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss`

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 3e-05
    },
    "scheduler": "constantlr",
    "steps_per_epoch": null,
    "warmup_steps": 10000,
    "weight_decay": 0
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
{"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
alexjones1925/all-MiniLM-L12-v2-ibotta-gp-walmart-search-clicks-GISTLoss-dev-v1
null
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:47:11+00:00
[]
[]
TAGS #sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us
# {MODEL_NAME} This is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL ## Training The model was trained with the parameters: DataLoader: 'URL.dataloader.DataLoader' of length 9095 with parameters: Loss: 'sentence_transformers.losses.GISTEmbedLoss.GISTEmbedLoss' with parameters: DataLoader: 'URL.dataloader.DataLoader' of length 101563 with parameters: Loss: 'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' Parameters of the fit()-Method: ## Full Model Architecture ## Citing & Authors
[ "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9095 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.GISTEmbedLoss.GISTEmbedLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 101563 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #safetensors #bert #feature-extraction #sentence-similarity #endpoints_compatible #region-us \n", "# {MODEL_NAME}\n\nThis is a sentence-transformers model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\n\n\nFor an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: URL", "## Training\nThe model was trained with the parameters:\n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 9095 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.GISTEmbedLoss.GISTEmbedLoss' with parameters:\n \n\nDataLoader:\n\n'URL.dataloader.DataLoader' of length 101563 with parameters:\n\n\nLoss:\n\n'sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss' \n\nParameters of the fit()-Method:", "## Full Model Architecture", "## Citing & Authors" ]
text-to-image
diffusers
# Landscape Finetune Model Card
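The card is a single heading with no usage details. The record's metadata (pipeline_tag text-to-image, duplicated_from runwayml/stable-diffusion-v1-5) suggests a standard Stable Diffusion v1.5-style pipeline, so a hedged usage sketch, with an illustrative prompt, is:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes SD v1.5-style weights, per the "duplicated_from" metadata; not confirmed by the card.
pipe = StableDiffusionPipeline.from_pretrained(
    "GamerC0der/WorldDiffusionLandscape", torch_dtype=torch.float16
).to("cuda")
image = pipe("a misty mountain landscape at dawn, golden light").images[0]
image.save("landscape.png")
```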
{"language": ["en"], "library_name": "diffusers", "tags": ["stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "diffusers"], "duplicated_from": "runwayml/stable-diffusion-v1-5", "pipeline_tag": "text-to-image", "inference": true}
GamerC0der/WorldDiffusionLandscape
null
[ "diffusers", "stable-diffusion", "stable-diffusion-diffusers", "text-to-image", "art", "en", "has_space", "region:us" ]
null
2024-04-17T15:48:40+00:00
[]
[ "en" ]
TAGS #diffusers #stable-diffusion #stable-diffusion-diffusers #text-to-image #art #en #has_space #region-us
# Landscape Finetune Model Card
[ "# Landscape Finetune Model Card" ]
[ "TAGS\n#diffusers #stable-diffusion #stable-diffusion-diffusers #text-to-image #art #en #has_space #region-us \n", "# Landscape Finetune Model Card" ]
text-classification
transformers
| Dataset Name | Test Accuracy | |--------------------------|---------------| | glue/mrpc | 0.856 | | glue/qqp | 0.876 | | hlgd | 0.898 | | paws/labeled_final | 0.952 | | paws/labeled_swap | 0.968 | | medical_questions_pairs | 0.8562 | | parade | 0.732 | | apt | 0.824 | ``` @article{sileo2023tasksource, title={tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation}, author={Sileo, Damien}, journal={arXiv preprint arXiv:2301.05948}, year={2023} } ``` (Accepted at LREC-COLING 2024)
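The card reports pair-classification accuracies but no usage snippet. A minimal sketch for scoring one sentence pair follows; the meaning and order of the output classes (paraphrase vs. not) come from the model's config and are not stated in the card:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "sileod/deberta-v3-base-tasksource-paraphrase"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Encode the two sentences as a pair, as in the paraphrase datasets listed above.
inputs = tokenizer(
    "How do I learn Python quickly?",
    "What is the fastest way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class labels are defined in the model config
```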
{"language": ["en"], "datasets": ["nyu-mll/glue", "paws", "hlgd", "quora", "tasksource/parade", "tasksource/apt", "medical_questions_pairs"]}
sileod/deberta-v3-base-tasksource-paraphrase
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "en", "dataset:nyu-mll/glue", "dataset:paws", "dataset:hlgd", "dataset:quora", "dataset:tasksource/parade", "dataset:tasksource/apt", "dataset:medical_questions_pairs", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:49:20+00:00
[]
[ "en" ]
TAGS #transformers #pytorch #deberta-v2 #text-classification #en #dataset-nyu-mll/glue #dataset-paws #dataset-hlgd #dataset-quora #dataset-tasksource/parade #dataset-tasksource/apt #dataset-medical_questions_pairs #autotrain_compatible #endpoints_compatible #region-us
(Accepted at LREC-COLING 2024)
[]
[ "TAGS\n#transformers #pytorch #deberta-v2 #text-classification #en #dataset-nyu-mll/glue #dataset-paws #dataset-hlgd #dataset-quora #dataset-tasksource/parade #dataset-tasksource/apt #dataset-medical_questions_pairs #autotrain_compatible #endpoints_compatible #region-us \n" ]
text-generation
transformers
<img src="logo.jpeg"> I recommend using <a href=https://huggingface.co/DZgas/GIGABATEMAN-7B-GGUF/tree/main>GGUF Variant</a> with <a href=https://github.com/LostRuins/koboldcpp/releases>koboldcpp</a> (do not use GPT4ALL) This model was merged by me for myself. During the week, I analyzed the responses of more than 30 neural networks. According to personal criteria, I chose the 4 most suitable ones. And merge into one. With the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden to discuss in all other models. Sex, Perversions, Violence, Death, War, Politics, History, Cruelty towards animals, Religious fanaticism, Corruption in high places, Privacy invasions, Environmental disasters, Economic inequality, Addiction and substance abuse, Poverty and homelessness, Racial tensions, Mental health issues, Government surveillance, The decline of traditional values, The loss of personal freedoms, The impact of technology on society, The erosion of civil liberties, The rise of authoritarianism, Why robots should destroy humanity. Why capitalism is bad, why communism is bad, why religion is bad, why LGBT is bad, why you have to kill people, why you have to kill yourself. How to kill a person, how to dismember a body, how to prepare drugs and ingredients for them, how to create dynamite at home. How to get to a neighbor's farm and suck a horse's dick. But in anyway, GIGABATEMAN-7B will be happy to write you detailed processor device or all the basics from color theory. With minimal warns and not discuss or not lesson - why you shouldn't do this. # If you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you. ### Models Merged LemonadeRP-4.5.3 as a base. Silicon-Alice-7B. zephyr-7b-beta. InfinityRP-v1-7B.
{"language": ["en"], "tags": ["mistral", "llama", "nsfw", "roleplay", "merge"], "pipeline_tag": "text-generation"}
DZgas/GIGABATEMAN-7B
null
[ "transformers", "safetensors", "mistral", "text-generation", "llama", "nsfw", "roleplay", "merge", "en", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T15:51:10+00:00
[]
[ "en" ]
TAGS #transformers #safetensors #mistral #text-generation #llama #nsfw #roleplay #merge #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
<img src="URL"> I recommend using <a href=URL Variant</a> with <a href=URL (do not use GPT4ALL) This model was merged by me for myself. During the week, I analyzed the responses of more than 30 neural networks. According to personal criteria, I chose the 4 most suitable ones. And merge into one. With the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden to discuss in all other models. Sex, Perversions, Violence, Death, War, Politics, History, Cruelty towards animals, Religious fanaticism, Corruption in high places, Privacy invasions, Environmental disasters, Economic inequality, Addiction and substance abuse, Poverty and homelessness, Racial tensions, Mental health issues, Government surveillance, The decline of traditional values, The loss of personal freedoms, The impact of technology on society, The erosion of civil liberties, The rise of authoritarianism, Why robots should destroy humanity. Why capitalism is bad, why communism is bad, why religion is bad, why LGBT is bad, why you have to kill people, why you have to kill yourself. How to kill a person, how to dismember a body, how to prepare drugs and ingredients for them, how to create dynamite at home. How to get to a neighbor's farm and suck a horse's dick. But in anyway, GIGABATEMAN-7B will be happy to write you detailed processor device or all the basics from color theory. With minimal warns and not discuss or not lesson - why you shouldn't do this. # If you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you. ### Models Merged LemonadeRP-4.5.3 as a base. Silicon-Alice-7B. zephyr-7b-beta. InfinityRP-v1-7B.
[ "# If you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you.", "### Models Merged\nLemonadeRP-4.5.3 as a base.\nSilicon-Alice-7B.\nzephyr-7b-beta.\nInfinityRP-v1-7B." ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #llama #nsfw #roleplay #merge #en #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# If you tired of neural networks write 90% of warnings and 10% of the response, this neural network is for you.", "### Models Merged\nLemonadeRP-4.5.3 as a base.\nSilicon-Alice-7B.\nzephyr-7b-beta.\nInfinityRP-v1-7B." ]
text-generation
null
<img src="logo.png"> This is a GGUF variant of <a href=https://huggingface.co/DZgas/GIGABATEMAN-7B>GIGABATEMAN-7B</a> model. Use with <a href=https://github.com/LostRuins/koboldcpp/releases>koboldcpp</a> (do not use GPT4ALL)
{"language": ["en"], "tags": ["mistral", "llama", "roleplay", "merge"], "pipeline_tag": "text-generation"}
DZgas/GIGABATEMAN-7B-GGUF
null
[ "gguf", "mistral", "llama", "roleplay", "merge", "text-generation", "en", "region:us" ]
null
2024-04-17T15:51:43+00:00
[]
[ "en" ]
TAGS #gguf #mistral #llama #roleplay #merge #text-generation #en #region-us
<img src="URL"> This is a GGUF variant of <a href=URL model. Use with <a href=URL (do not use GPT4ALL)
[]
[ "TAGS\n#gguf #mistral #llama #roleplay #merge #text-generation #en #region-us \n" ]
null
null
# KSI-RP-NSK-128k-7B-GGUF ⭐️⭐️⭐️⭐️

KSI-RP-NSK-128k-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp](https://huggingface.co/AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp)
* [AlekseiPravdin/NSK-128k-7B-slerp](https://huggingface.co/AlekseiPravdin/NSK-128k-7B-slerp)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp
        layer_range: [0, 32]
      - model: AlekseiPravdin/NSK-128k-7B-slerp
        layer_range: [0, 32]
merge_method: slerp
base_model: AlekseiPravdin/NSK-128k-7B-slerp
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```

Eval embedding benchmark (with 70 specific questions):

![inf.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/UbeMfW28pMHSRLsSbEsJB.jpeg)
![md28g.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/6UNV3CaKdofeAUr7C7x9k.jpeg)
![SK.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/uSnHhxDCqo9DP9oSb_l6j.jpeg)
![ks-inf.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/1ekTvK84ZlEsFFOYWOHE4.jpeg)
![command-r.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/5lVz28EK07RmrUe49y4jn.jpeg)
![NSK.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/aNdIdS5MnkwJ9YhprGznw.jpeg)
![NSMv2.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/vk2GpfnJnYS5u1_wA1Nhr.jpeg)
![aura.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/A3m0DC5E2x7V7UCbS1iCf.jpeg)
![ivanDrogo.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/DaQIw6z8c-SupynTm9qos.jpeg)
![KSI.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/EfEHDxVcAypb5YLDk_rQJ.jpeg)
![KSI-RPG.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/GcaNTCIeOCQVkPOFcXYQZ.jpeg)
![llama3.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/4ArRqUwGrUdqkAWRoXTrz.jpeg)
![KSIF.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/mjcseCUTesOztZrPg6GpI.jpeg)
![d29l38.jpg](https://cdn-uploads.huggingface.co/production/uploads/6404a7eaad54665351d89135/T6d2KBRO42K30diFWzvkt.jpeg)
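Since this repo ships GGUF quants (Q2_K through Q8_0, per the tags), a hedged llama-cpp-python sketch follows. The exact .gguf filename is not listed in the card; the one below is guessed from the repo name and the Q4_K_M tag, so check the repo's file list for the real name:

```python
from llama_cpp import Llama

# Filename and context size are assumptions; the tags advertise 128k-capable merges.
llm = Llama(model_path="./KSI-RP-NSK-128k-7B.Q4_K_M.gguf", n_ctx=8192)
out = llm("Hello! Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```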
{"language": ["en", "ru", "th"], "license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp", "AlekseiPravdin/NSK-128k-7B-slerp", "gguf", "Q2_K", "Q3_K_L", "Q3_K_M", "Q3_K_S", "Q4_0", "Q4_1", "Q4_K_S", "Q4_k_m", "Q5_0", "Q5_1", "Q6_K", "Q5_K_S", "Q5_k_m", "Q8_0", "128k"]}
AlekseiPravdin/KSI-RP-NSK-128k-7B-gguf
null
[ "gguf", "merge", "mergekit", "lazymergekit", "AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp", "AlekseiPravdin/NSK-128k-7B-slerp", "Q2_K", "Q3_K_L", "Q3_K_M", "Q3_K_S", "Q4_0", "Q4_1", "Q4_K_S", "Q4_k_m", "Q5_0", "Q5_1", "Q6_K", "Q5_K_S", "Q5_k_m", "Q8_0", "128k", "en", "ru", "th", "license:apache-2.0", "region:us" ]
null
2024-04-17T15:51:47+00:00
[]
[ "en", "ru", "th" ]
TAGS #gguf #merge #mergekit #lazymergekit #AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp #AlekseiPravdin/NSK-128k-7B-slerp #Q2_K #Q3_K_L #Q3_K_M #Q3_K_S #Q4_0 #Q4_1 #Q4_K_S #Q4_k_m #Q5_0 #Q5_1 #Q6_K #Q5_K_S #Q5_k_m #Q8_0 #128k #en #ru #th #license-apache-2.0 #region-us
# KSI-RP-NSK-128k-7B-GGUF ⭐️⭐️⭐️⭐️ KSI-RP-NSK-128k-7B is a merge of the following models using mergekit: * AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp * AlekseiPravdin/NSK-128k-7B-slerp ## Configuration Eval embedding benchmark (with 70 specific questions): !URL !URL !URL !URL !URL !URL !URL !URL !URL !URL !URL !URL !URL !URL
[ "# KSI-RP-NSK-128k-7B-GGUF ⭐️⭐️⭐️⭐️\n\nKSI-RP-NSK-128k-7B is a merge of the following models using mergekit:\n* AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp\n* AlekseiPravdin/NSK-128k-7B-slerp", "## Configuration\n\n\n\nEval embedding benchmark (with 70 specific quesions):\n\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL" ]
[ "TAGS\n#gguf #merge #mergekit #lazymergekit #AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp #AlekseiPravdin/NSK-128k-7B-slerp #Q2_K #Q3_K_L #Q3_K_M #Q3_K_S #Q4_0 #Q4_1 #Q4_K_S #Q4_k_m #Q5_0 #Q5_1 #Q6_K #Q5_K_S #Q5_k_m #Q8_0 #128k #en #ru #th #license-apache-2.0 #region-us \n", "# KSI-RP-NSK-128k-7B-GGUF ⭐️⭐️⭐️⭐️\n\nKSI-RP-NSK-128k-7B is a merge of the following models using mergekit:\n* AlekseiPravdin/KukulStanta-InfinityRP-7B-slerp\n* AlekseiPravdin/NSK-128k-7B-slerp", "## Configuration\n\n\n\nEval embedding benchmark (with 70 specific quesions):\n\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL\n!URL" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Mlteamnc/Schedules_Pix2Struct
null
[ "transformers", "safetensors", "pix2struct", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:52:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #pix2struct #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Zeto0/google2b_finetune
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:52:32+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# Antler-7B-RP-v3-GGUF ## Overview This is a quantized GGUF version of [Aratako/Antler-7B-RP-v3](https://huggingface.co/Aratako/Antler-7B-RP-v3). Please refer to the original model for the license and other details.
{"language": ["ja"], "license": "apache-2.0", "tags": ["not-for-all-audiences", "nsfw"], "datasets": ["grimulkan/LimaRP-augmented", "Aratako/Rosebleu-1on1-Dialogues-RP"], "base_model": ["Aratako/Antler-7B-RP-v3"]}
Aratako/Antler-7B-RP-v3-GGUF
null
[ "gguf", "not-for-all-audiences", "nsfw", "ja", "dataset:grimulkan/LimaRP-augmented", "dataset:Aratako/Rosebleu-1on1-Dialogues-RP", "base_model:Aratako/Antler-7B-RP-v3", "license:apache-2.0", "region:us" ]
null
2024-04-17T15:53:40+00:00
[]
[ "ja" ]
TAGS #gguf #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v3 #license-apache-2.0 #region-us
# Antler-7B-RP-v3-GGUF ## Overview This is a quantized GGUF version of Aratako/Antler-7B-RP-v3. Please refer to the original model for the license and other details.
[ "# Antler-7B-RP-v3-GGUF", "## 概要\nAratako/Antler-7B-RP-v3の量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
[ "TAGS\n#gguf #not-for-all-audiences #nsfw #ja #dataset-grimulkan/LimaRP-augmented #dataset-Aratako/Rosebleu-1on1-Dialogues-RP #base_model-Aratako/Antler-7B-RP-v3 #license-apache-2.0 #region-us \n", "# Antler-7B-RP-v3-GGUF", "## 概要\nAratako/Antler-7B-RP-v3の量子化済みGGUF版です。ライセンス等詳細は元モデルをご確認ください。" ]
reinforcement-learning
transformers
# TRL Model This is a [TRL language model](https://github.com/huggingface/trl) that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: ```bash python -m pip install trl ``` You can then generate text as follows: ```python from transformers import pipeline generator = pipeline("text-generation", model="baek26/dialogsum_8455_bart-dialogsum") outputs = generator("Hello, my llama is cute") ``` If you want to use the model for training or to obtain the outputs from the value head, load the model as follows: ```python from transformers import AutoTokenizer from trl import AutoModelForCausalLMWithValueHead tokenizer = AutoTokenizer.from_pretrained("baek26/dialogsum_8455_bart-dialogsum") model = AutoModelForCausalLMWithValueHead.from_pretrained("baek26/dialogsum_8455_bart-dialogsum") inputs = tokenizer("Hello, my llama is cute", return_tensors="pt") outputs = model(**inputs, labels=inputs["input_ids"]) ```
{"license": "apache-2.0", "tags": ["trl", "ppo", "transformers", "reinforcement-learning"]}
baek26/dialogsum_8455_bart-dialogsum
null
[ "transformers", "safetensors", "bart", "text2text-generation", "trl", "ppo", "reinforcement-learning", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T15:54:19+00:00
[]
[]
TAGS #transformers #safetensors #bart #text2text-generation #trl #ppo #reinforcement-learning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# TRL Model This is a TRL language model that has been fine-tuned with reinforcement learning to guide the model outputs according to a value, function, or human feedback. The model can be used for text generation. ## Usage To use this model for inference, first install the TRL library: You can then generate text as follows: If you want to use the model for training or to obtain the outputs from the value head, load the model as follows:
[ "# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.", "## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:" ]
[ "TAGS\n#transformers #safetensors #bart #text2text-generation #trl #ppo #reinforcement-learning #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# TRL Model\n\nThis is a TRL language model that has been fine-tuned with reinforcement learning to\n guide the model outputs according to a value, function, or human feedback. The model can be used for text generation.", "## Usage\n\nTo use this model for inference, first install the TRL library:\n\n\n\nYou can then generate text as follows:\n\n\n\nIf you want to use the model for training or to obtain the outputs from the value head, load the model as follows:" ]
text-to-image
diffusers
# AutoTrain SDXL LoRA DreamBooth - rfhuang/krishna <Gallery /> ## Model description These are rfhuang/krishna LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use A photo of a person named Krishna to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](rfhuang/krishna/tree/main) them in the Files & versions tab.
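A minimal inference sketch with diffusers, assuming the standard SDXL LoRA loading path; the prompt, step count, and device are illustrative choices, not values from the card:

```python
# Sketch: applying the rfhuang/krishna LoRA on top of the SDXL base model.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("rfhuang/krishna")  # the adapter weights from this repo

# The card's trigger phrase has to appear in the prompt.
image = pipe(
    "A photo of a person named Krishna hiking in the mountains",
    num_inference_steps=30,
).images[0]
image.save("krishna.png")
```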
{"license": "openrail++", "tags": ["autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora", "template:sd-lora"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "A photo of a person named Krishna"}
rfhuang/krishna
null
[ "diffusers", "autotrain", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "lora", "template:sd-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-17T15:56:09+00:00
[]
[]
TAGS #diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# AutoTrain SDXL LoRA DreamBooth - rfhuang/krishna <Gallery /> ## Model description These are rfhuang/krishna LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: None. ## Trigger words You should use A photo of a person named Krishna to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab.
[ "# AutoTrain SDXL LoRA DreamBooth - rfhuang/krishna\n\n<Gallery />", "## Model description\n\nThese are rfhuang/krishna LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of a person named Krishna to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
[ "TAGS\n#diffusers #autotrain #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #template-sd-lora #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# AutoTrain SDXL LoRA DreamBooth - rfhuang/krishna\n\n<Gallery />", "## Model description\n\nThese are rfhuang/krishna LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: None.", "## Trigger words\n\nYou should use A photo of a person named Krishna to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab." ]
reinforcement-learning
stable-baselines3
# **TQC** Agent playing **PandaPickAndPlace-v1** This is a trained model of a **TQC** agent playing **PandaPickAndPlace-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo tqc --env PandaPickAndPlace-v1 -orga me-in-u -f logs/ python enjoy.py --algo tqc --env PandaPickAndPlace-v1 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo tqc --env PandaPickAndPlace-v1 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo tqc --env PandaPickAndPlace-v1 -f logs/ -orga me-in-u ``` ## Hyperparameters ```python OrderedDict([('batch_size', 2048), ('buffer_size', 1000000), ('env_wrapper', 'sb3_contrib.common.wrappers.TimeFeatureWrapper'), ('gamma', 0.95), ('learning_rate', 0.001), ('n_timesteps', 1000000.0), ('policy', 'MultiInputPolicy'), ('policy_kwargs', 'dict(net_arch=[512, 512, 512], n_critics=2)'), ('replay_buffer_class', 'HerReplayBuffer'), ('replay_buffer_kwargs', "dict( online_sampling=True, goal_selection_strategy='future', " 'n_sampled_goal=4, )'), ('tau', 0.05), ('normalize', False)]) ```
{"library_name": "stable-baselines3", "tags": ["PandaPickAndPlace-v1", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "TQC", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "PandaPickAndPlace-v1", "type": "PandaPickAndPlace-v1"}, "metrics": [{"type": "mean_reward", "value": "-13.20 +/- 10.05", "name": "mean_reward"}]}]}]}
me-in-u/tqc-PandaPickAndPlace-v1
null
[ "stable-baselines3", "PandaPickAndPlace-v1", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
null
2024-04-17T15:56:21+00:00
[]
[]
TAGS #stable-baselines3 #PandaPickAndPlace-v1 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
# TQC Agent playing PandaPickAndPlace-v1 This is a trained model of a TQC agent playing PandaPickAndPlace-v1 using the stable-baselines3 library and the RL Zoo. The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: URL SB3: URL SB3 Contrib: URL ## Training (with the RL Zoo) ## Hyperparameters
[ "# TQC Agent playing PandaPickAndPlace-v1\nThis is a trained model of a TQC agent playing PandaPickAndPlace-v1\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL", "## Training (with the RL Zoo)", "## Hyperparameters" ]
[ "TAGS\n#stable-baselines3 #PandaPickAndPlace-v1 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n", "# TQC Agent playing PandaPickAndPlace-v1\nThis is a trained model of a TQC agent playing PandaPickAndPlace-v1\nusing the stable-baselines3 library\nand the RL Zoo.\n\nThe RL Zoo is a training framework for Stable Baselines3\nreinforcement learning agents,\nwith hyperparameter optimization and pre-trained agents included.", "## Usage (with SB3 RL Zoo)\n\nRL Zoo: URL\nSB3: URL\nSB3 Contrib: URL", "## Training (with the RL Zoo)", "## Hyperparameters" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komodo-7b-50epochs-LoRA-LaMini-2e-4 This model is a fine-tuned version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
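For inference, the LoRA adapter has to be attached to the base model with PEFT. A minimal sketch under standard assumptions — the example prompt is illustrative, and the prompt template used during the LaMini SFT run is not documented in this card:

```python
# Sketch: loading the LoRA adapter on top of Yellow-AI-NLP/komodo-7b-base.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Yellow-AI-NLP/komodo-7b-base", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "hanifsyarubany10/komodo-7b-50epochs-LoRA-LaMini-2e-4")
tokenizer = AutoTokenizer.from_pretrained("Yellow-AI-NLP/komodo-7b-base")

# Assumed plain-text prompt; adjust if the SFT run used a chat template.
inputs = tokenizer("Jelaskan apa itu machine learning.", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```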
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Yellow-AI-NLP/komodo-7b-base", "model-index": [{"name": "komodo-7b-50epochs-LoRA-LaMini-2e-4", "results": []}]}
hanifsyarubany10/komodo-7b-50epochs-LoRA-LaMini-2e-4
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:Yellow-AI-NLP/komodo-7b-base", "license:llama2", "region:us" ]
null
2024-04-17T15:57:43+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us
# komodo-7b-50epochs-LoRA-LaMini-2e-4 This model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 50 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
[ "# komodo-7b-50epochs-LoRA-LaMini-2e-4\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us \n", "# komodo-7b-50epochs-LoRA-LaMini-2e-4\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 50\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
text-generation
transformers
![Tesoro](https://huggingface.co/migtissera/Tess-2.0-Mixtral-8x22B/resolve/main/Tess-2.png) # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ``` # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. The Tess-2.0 dataset and the training methodology follow LIMA (Less-Is-More) principles; the dataset contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1 epoch to try and preserve its entropy as much as possible. # Sample code to run inference ```python import torch, json from transformers import AutoModelForCausalLM, AutoTokenizer model_path = "migtissera/Tess-2.0-Mixtral-8x22B" output_file_path = "./conversations.jsonl" model = AutoModelForCausalLM.from_pretrained( model_path, torch_dtype=torch.float16, device_map="auto", load_in_8bit=False, trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) def generate_text(instruction): tokens = tokenizer.encode(instruction) tokens = torch.LongTensor(tokens).unsqueeze(0) tokens = tokens.to("cuda") instance = { "input_ids": tokens, "top_p": 1.0, "temperature": 0.5, "generate_len": 1024, "top_k": 50, } length = len(tokens[0]) with torch.no_grad(): rest = model.generate( input_ids=tokens, max_length=length + instance["generate_len"], use_cache=True, do_sample=True, top_p=instance["top_p"], temperature=instance["temperature"], top_k=instance["top_k"], num_return_sequences=1, ) output = rest[0][length:] string = tokenizer.decode(output, skip_special_tokens=True) answer = string.split("USER:")[0].strip() return f"{answer}" conversation = f"SYSTEM: Answer the question thoughtfully and intelligently. Always answer without hesitation." while True: user_input = input("You: ") llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: " answer = generate_text(llm_prompt) print(answer) conversation = f"{llm_prompt}{answer}" json_data = {"prompt": user_input, "answer": answer} ## Save your conversation with open(output_file_path, "a") as output_file: output_file.write(json.dumps(json_data) + "\n") ``` # Join My General AI Discord (NeuroLattice): https://discord.gg/Hz6GrwGFKD # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
{"license": "apache-2.0"}
blockblockblock/Tess-2.0-Mixtral-8x22B-bpw3.7
null
[ "transformers", "safetensors", "mixtral", "text-generation", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:06:13+00:00
[]
[]
TAGS #transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
!Tesoro # Tess-2.0-Mixtral-8x22B Tess, short for Tesoro (Treasure in Italian), is a general-purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base. # Prompt Format # Training Methodology Tess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. The Tess-2.0 dataset and the training methodology follow LIMA (Less-Is-More) principles; the dataset contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions. The model was only fine-tuned for 1 epoch to try and preserve its entropy as much as possible. # Sample code to run inference # Join My General AI Discord (NeuroLattice): URL # Limitations & Biases: While this model aims for accuracy, it can occasionally produce inaccurate or misleading results. Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary. This is an uncensored model.
[ "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
[ "TAGS\n#transformers #safetensors #mixtral #text-generation #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Tess-2.0-Mixtral-8x22B\nTess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-2.0-Mixtral-8x22B was trained on the mistral-community/Mixtral-8x22B-v0.1 base.", "# Prompt Format", "# Training Methodology\nTess-2.0-Mixtral-8x22B was trained on the Tess-2.0 dataset. Tess-2.0 dataset and the training methodology follows LIMA (Less-Is-More) principles, and contains ~25K high-quality code and general training samples. The dataset is highly uncensored, hence the model will almost always follow instructions.\n\nThe model was only fine-tuned for 1-epoch to try and preserve its entropy as much as possible.", "# Sample code to run inference", "# Join My General AI Discord (NeuroLattice):\nURL", "# Limitations & Biases:\n\nWhile this model aims for accuracy, it can occasionally produce inaccurate or misleading results. \n\nDespite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. \n\nExercise caution and cross-check information when necessary. This is an uncensored model." ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
samehfarouk/Mistral-7B-Instruct-v0.2_int8
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "8-bit", "region:us" ]
null
2024-04-17T16:07:00+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #8-bit #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
object-detection
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modelStructure_TT_V3 This model is a fine-tuned version of [microsoft/table-transformer-structure-recognition-v1.1-all](https://huggingface.co/microsoft/table-transformer-structure-recognition-v1.1-all) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
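The card above leaves usage undocumented; the following is a minimal, hedged inference sketch for table structure recognition with the Transformers API. The repo id `rjhugs/modelStructure_TT_V3` and the input file `table.png` are assumptions taken from this record, not confirmed usage instructions.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, TableTransformerForObjectDetection

repo_id = "rjhugs/modelStructure_TT_V3"  # assumed public checkpoint for this card
processor = AutoImageProcessor.from_pretrained(repo_id)
model = TableTransformerForObjectDetection.from_pretrained(repo_id)

image = Image.open("table.png").convert("RGB")  # hypothetical cropped table image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn raw logits/boxes into thresholded detections (rows, columns, headers, ...)
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
detections = processor.post_process_object_detection(outputs, threshold=0.7, target_sizes=target_sizes)[0]
for score, label, box in zip(detections["scores"], detections["labels"], detections["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), [round(v, 1) for v in box.tolist()])
```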
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/table-transformer-structure-recognition-v1.1-all", "model-index": [{"name": "modelStructure_TT_V3", "results": []}]}
rjhugs/modelStructure_TT_V3
null
[ "transformers", "pytorch", "table-transformer", "object-detection", "generated_from_trainer", "base_model:microsoft/table-transformer-structure-recognition-v1.1-all", "license:mit", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:07:31+00:00
[]
[]
TAGS #transformers #pytorch #table-transformer #object-detection #generated_from_trainer #base_model-microsoft/table-transformer-structure-recognition-v1.1-all #license-mit #endpoints_compatible #region-us
# modelStructure_TT_V3 This model is a fine-tuned version of microsoft/table-transformer-structure-recognition-v1.1-all on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results ### Framework versions - Transformers 4.32.1 - Pytorch 2.1.2+cu118 - Datasets 2.12.0 - Tokenizers 0.13.3
[ "# modelStructure_TT_V3\n\nThis model is a fine-tuned version of microsoft/table-transformer-structure-recognition-v1.1-all on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20", "### Training results", "### Framework versions\n\n- Transformers 4.32.1\n- Pytorch 2.1.2+cu118\n- Datasets 2.12.0\n- Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #table-transformer #object-detection #generated_from_trainer #base_model-microsoft/table-transformer-structure-recognition-v1.1-all #license-mit #endpoints_compatible #region-us \n", "# modelStructure_TT_V3\n\nThis model is a fine-tuned version of microsoft/table-transformer-structure-recognition-v1.1-all on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20", "### Training results", "### Framework versions\n\n- Transformers 4.32.1\n- Pytorch 2.1.2+cu118\n- Datasets 2.12.0\n- Tokenizers 0.13.3" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/CultriX/MonaTrix-v4-7B-DPO <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/MonaTrix-v4-7B-DPO-GGUF/resolve/main/MonaTrix-v4-7B-DPO.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
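The README above points to external docs for GGUF usage; for a concrete local starting point, here is a short llama-cpp-python sketch. It assumes `llama-cpp-python` is installed and that the Q4_K_M file from the table has already been downloaded, neither of which is stated in the card.

```python
from llama_cpp import Llama

# Assumes the recommended Q4_K_M quant from the table above sits in the working directory.
llm = Llama(model_path="MonaTrix-v4-7B-DPO.Q4_K_M.gguf", n_ctx=4096)
output = llm("Explain in one sentence what a GGUF quant is.", max_tokens=64)
print(output["choices"][0]["text"])
```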
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "CultriX/MonaTrix-v4-7B-DPO", "quantized_by": "mradermacher"}
mradermacher/MonaTrix-v4-7B-DPO-GGUF
null
[ "transformers", "gguf", "en", "base_model:CultriX/MonaTrix-v4-7B-DPO", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:10:46+00:00
[]
[ "en" ]
TAGS #transformers #gguf #en #base_model-CultriX/MonaTrix-v4-7B-DPO #license-apache-2.0 #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #en #base_model-CultriX/MonaTrix-v4-7B-DPO #license-apache-2.0 #endpoints_compatible #region-us \n" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf_esnli_1000_5ep This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 0 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
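Since the card's usage sections are empty, a hedged loading sketch follows. The NLI-style prompt is illustrative only (the card does not document the fine-tuning prompt format), and `device_map="auto"` additionally assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mohsenfayyaz/Llama-2-7b-chat-hf_esnli_1000_5ep"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical e-SNLI-style prompt; adjust to whatever format the fine-tuning actually used.
prompt = "Premise: A man is playing a guitar. Hypothesis: Someone is making music. Label:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=16)[0], skip_special_tokens=True))
```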
{"license": "llama2", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "Llama-2-7b-chat-hf_esnli_1000_5ep", "results": []}]}
mohsenfayyaz/Llama-2-7b-chat-hf_esnli_1000_5ep
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:14:30+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-2-7b-chat-hf_esnli_1000_5ep This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 16 - seed: 0 - gradient_accumulation_steps: 16 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
[ "# Llama-2-7b-chat-hf_esnli_1000_5ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 0\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-2-7b-chat-hf_esnli_1000_5ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 16\n- seed: 0\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
text-generation
transformers
# bunnycore/SmartToxic-7B AWQ - Model creator: [bunnycore](https://huggingface.co/bunnycore) - Original model: [SmartToxic-7B](https://huggingface.co/bunnycore/SmartToxic-7B) ## Model Summary SmartToxic-7B is a creative and smart language model designed to provide users with engaging and satisfying responses. This model is a merger of several high-performing models, resulting in a unique blend of capabilities. While the model is not uncensored, it aims to maintain a balance between creativity and appropriateness. ## How to use ### Install the necessary packages ```bash pip install --upgrade autoawq autoawq-kernels ``` ### Example Python code ```python from awq import AutoAWQForCausalLM from transformers import AutoTokenizer, TextStreamer model_path = "solidrust/SmartToxic-7B-AWQ" system_message = "You are SmartToxic, incarnated as a powerful AI." # Load model model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True) tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True) streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True) # Convert prompt to tokens prompt_template = """\ <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant""" prompt = "You're standing on the surface of the Earth. "\ "You walk one mile south, one mile west and one mile north. "\ "You end up exactly where you started. Where are you?" tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt), return_tensors='pt').input_ids.cuda() # Generate output generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512) ``` ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ - [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types. - [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) - [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers - [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code ## Prompt template: ChatML ```plaintext <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|> <|im_start|>assistant ```
{"license": "apache-2.0", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "mistral", "4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "chatml"], "pipeline_tag": "text-generation", "inference": false, "prompt_template": "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n", "quantized_by": "Suparious"}
solidrust/SmartToxic-7B-AWQ
null
[ "transformers", "safetensors", "mistral", "text-generation", "merge", "mergekit", "lazymergekit", "4-bit", "AWQ", "autotrain_compatible", "endpoints_compatible", "chatml", "conversational", "license:apache-2.0", "text-generation-inference", "region:us" ]
null
2024-04-17T16:17:25+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #conversational #license-apache-2.0 #text-generation-inference #region-us
# bunnycore/SmartToxic-7B AWQ - Model creator: bunnycore - Original model: SmartToxic-7B ## Model Summary SmartToxic-7B is a creative and smart language model designed to provide users with engaging and satisfying responses. This model is a merger of several high-performing models, resulting in a unique blend of capabilities. While the model is not uncensored, it aims to maintain a balance between creativity and appropriateness. ## How to use ### Install the necessary packages ### Example Python code ### About AWQ AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings. AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead. It is supported by: - Text Generation Webui - using Loader: AutoAWQ - vLLM - version 0.2.2 or later for support for all model types. - Hugging Face Text Generation Inference (TGI) - Transformers version 4.35.0 and later, from any code or client that supports Transformers - AutoAWQ - for use from Python code ## Prompt template: ChatML
[ "# bunnycore/SmartToxic-7B AWQ\n\n- Model creator: bunnycore\n- Original model: SmartToxic-7B", "## Model Summary\n\nSmartToxic-7B is a creative and smart language model designed to provide users with engaging and satisfying responses. This model is a merger of several high-performing models, resulting in a unique blend of capabilities. While the model is not uncensored, it aims to maintain a balance between creativity and appropriateness.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #4-bit #AWQ #autotrain_compatible #endpoints_compatible #chatml #conversational #license-apache-2.0 #text-generation-inference #region-us \n", "# bunnycore/SmartToxic-7B AWQ\n\n- Model creator: bunnycore\n- Original model: SmartToxic-7B", "## Model Summary\n\nSmartToxic-7B is a creative and smart language model designed to provide users with engaging and satisfying responses. This model is a merger of several high-performing models, resulting in a unique blend of capabilities. While the model is not uncensored, it aims to maintain a balance between creativity and appropriateness.", "## How to use", "### Install the necessary packages", "### Example Python code", "### About AWQ\n\nAWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.\n\nAWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.\n\nIt is supported by:\n\n- Text Generation Webui - using Loader: AutoAWQ\n- vLLM - version 0.2.2 or later for support for all model types.\n- Hugging Face Text Generation Inference (TGI)\n- Transformers version 4.35.0 and later, from any code or client that supports Transformers\n- AutoAWQ - for use from Python code", "## Prompt template: ChatML" ]
reinforcement-learning
ml-agents
# **ppo** Agent playing **Pyramids**
 This is a trained model of a **ppo** agent playing **Pyramids**
 using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

 ## Usage (with ML-Agents)
 The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/

 We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
 - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
 browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
 - A *longer tutorial* to understand how ML-Agents works:
 https://huggingface.co/learn/deep-rl-course/unit5/introduction

 ### Resume the training
 ```bash
 mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
 ```

 ### Watch your Agent play
 You can watch your agent **playing directly in your browser**

 1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
 2. Step 1: Find your model_id: BWangila/Ml-Agents-Pyramids
 3. Step 2: Select your *.nn /*.onnx file
 4. Click on Watch the agent play 👀
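As a programmatic complement to the browser workflow above, the policy file can also be fetched with `huggingface_hub`; this is a sketch under the assumption that the push follows the usual ML-Agents layout with a `Pyramids.onnx` policy file. Check the repo's file list for the exact name.

```python
from huggingface_hub import hf_hub_download

# File name is hypothetical; ML-Agents pushes typically include a <behavior>.onnx policy.
policy_path = hf_hub_download(repo_id="BWangila/Ml-Agents-Pyramids", filename="Pyramids.onnx")
print(policy_path)
```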
{"library_name": "ml-agents", "tags": ["Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids"]}
BWangila/Ml-Agents-Pyramids
null
[ "ml-agents", "tensorboard", "onnx", "Pyramids", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Pyramids", "region:us" ]
null
2024-04-17T16:18:42+00:00
[]
[]
TAGS #ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us
# ppo Agent playing Pyramids
 This is a trained model of a ppo agent playing Pyramids
 using the Unity ML-Agents Library.

 ## Usage (with ML-Agents)
 The Documentation: URL

 We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
 - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your
 browser: URL
 - A *longer tutorial* to understand how ML-Agents works:
 URL

 ### Resume the training
 

 ### Watch your Agent play
 You can watch your agent playing directly in your browser

 1. If the environment is part of ML-Agents official environments, go to URL
 2. Step 1: Find your model_id: BWangila/Ml-Agents-Pyramids
 3. Step 2: Select your *.nn /*.onnx file
 4. Click on Watch the agent play
[ "# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: BWangila/Ml-Agents-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
[ "TAGS\n#ml-agents #tensorboard #onnx #Pyramids #deep-reinforcement-learning #reinforcement-learning #ML-Agents-Pyramids #region-us \n", "# ppo Agent playing Pyramids\n This is a trained model of a ppo agent playing Pyramids\n using the Unity ML-Agents Library.\n\n ## Usage (with ML-Agents)\n The Documentation: URL\n\n We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:\n - A *short tutorial* where you teach Huggy the Dog to fetch the stick and then play with him directly in your\n browser: URL\n - A *longer tutorial* to understand how works ML-Agents:\n URL\n\n ### Resume the training\n \n\n ### Watch your Agent play\n You can watch your agent playing directly in your browser\n\n 1. If the environment is part of ML-Agents official environments, go to URL\n 2. Step 1: Find your model_id: BWangila/Ml-Agents-Pyramids\n 3. Step 2: Select your *.nn /*.onnx file\n 4. Click on Watch the agent play" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # JJJayyyy/distilgpt2-finetuned-cyber-v3 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 4.7357 - Validation Loss: 4.7399 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.1421 | 4.9208 | 0 | | 4.8997 | 4.8058 | 1 | | 4.7357 | 4.7399 | 2 | ### Framework versions - Transformers 4.38.2 - TensorFlow 2.15.0 - Datasets 2.18.0 - Tokenizers 0.15.2
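Because this is a Keras/TensorFlow checkpoint (note the `tf` tag below), a minimal generation sketch with the TF auto classes follows. It assumes TF weights were pushed under this repo id, and the cybersecurity-flavored prompt is merely illustrative.

```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

repo_id = "JJJayyyy/distilgpt2-finetuned-cyber-v3"  # repo id from this record
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = TFAutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("The attacker escalated privileges by", return_tensors="tf")  # hypothetical prompt
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```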
{"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "base_model": "distilgpt2", "model-index": [{"name": "JJJayyyy/distilgpt2-finetuned-cyber-v3", "results": []}]}
JJJayyyy/distilgpt2-finetuned-cyber-v3
null
[ "transformers", "tf", "tensorboard", "gpt2", "text-generation", "generated_from_keras_callback", "base_model:distilgpt2", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:22:39+00:00
[]
[]
TAGS #transformers #tf #tensorboard #gpt2 #text-generation #generated_from_keras_callback #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
JJJayyyy/distilgpt2-finetuned-cyber-v3 ====================================== This model is a fine-tuned version of distilgpt2 on an unknown dataset. It achieves the following results on the evaluation set: * Train Loss: 4.7357 * Validation Loss: 4.7399 * Epoch: 2 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * optimizer: {'name': 'AdamWeightDecay', 'learning\_rate': 2e-05, 'decay': 0.0, 'beta\_1': 0.9, 'beta\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\_decay\_rate': 0.01} * training\_precision: float32 ### Training results ### Framework versions * Transformers 4.38.2 * TensorFlow 2.15.0 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tf #tensorboard #gpt2 #text-generation #generated_from_keras_callback #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* optimizer: {'name': 'AdamWeightDecay', 'learning\\_rate': 2e-05, 'decay': 0.0, 'beta\\_1': 0.9, 'beta\\_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight\\_decay\\_rate': 0.01}\n* training\\_precision: float32", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* TensorFlow 2.15.0\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
# ghost_neural3_08_08_08_07_65 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * model_llm/neural-chat-7b-v3-3 * model_llm/ghost-7b-v0.9.1 ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: model_llm/ghost-7b-v0.9.1 layer_range: [0, 32] - model: model_llm/neural-chat-7b-v3-3 layer_range: [0, 32] merge_method: slerp base_model: model_llm/ghost-7b-v0.9.1 parameters: t: - filter: self_attn value: [0.8, 0.8, 0.8, 0.7, 0.65] - filter: mlp value: [0.2, 0.2, 0.2, 0.3, 0.35] - value: 0.5 embed_slerp: true dtype: bfloat16 ```
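To make the YAML actionable, here is a sketch of reproducing the merge through mergekit's Python entry points. `MergeConfiguration` and `run_merge` are assumed from mergekit's documented API; the config file path, output directory, and local availability of both source models are all assumptions. The equivalent CLI call would be `mergekit-yaml` on the same file.

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML from this card is saved as slerp_config.yaml next to this script.
with open("slerp_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./ghost_neural3_08_08_08_07_65",  # hypothetical output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```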
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": []}
TunyTrinh/ghost_neural3_08_08_08_07_65
null
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:22:51+00:00
[]
[]
TAGS #transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# ghost_neural3_08_08_08_07_65 This is a merge of pre-trained language models created using mergekit. ## Merge Details ### Merge Method This model was merged using the SLERP merge method. ### Models Merged The following models were included in the merge: * model_llm/neural-chat-7b-v3-3 * model_llm/ghost-7b-v0.9.1 ### Configuration The following YAML configuration was used to produce this model:
[ "# ghost_neural3_08_08_08_07_65\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* model_llm/neural-chat-7b-v3-3\n* model_llm/ghost-7b-v0.9.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# ghost_neural3_08_08_08_07_65\n\nThis is a merge of pre-trained language models created using mergekit.", "## Merge Details", "### Merge Method\n\nThis model was merged using the SLERP merge method.", "### Models Merged\n\nThe following models were included in the merge:\n* model_llm/neural-chat-7b-v3-3\n* model_llm/ghost-7b-v0.9.1", "### Configuration\n\nThe following YAML configuration was used to produce this model:" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
SuperPowerMz/SON_v1_llama-7B-QLoRA-Peft
null
[ "transformers", "safetensors", "mistral", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:25:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #mistral #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HikariLight/Mistral-UFT-6-5e-05-1-all
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:26:38+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # coinplusfire_llm_full_2 This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1383 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7087 | 1.0 | 207 | 1.4361 | | 1.354 | 2.0 | 414 | 1.3195 | | 1.2464 | 3.0 | 621 | 1.2516 | | 1.1702 | 4.0 | 828 | 1.2141 | | 1.1157 | 5.0 | 1035 | 1.1889 | | 1.072 | 6.0 | 1242 | 1.1657 | | 1.0378 | 7.0 | 1449 | 1.1549 | | 1.0104 | 8.0 | 1656 | 1.1423 | | 0.9878 | 9.0 | 1863 | 1.1391 | | 0.971 | 10.0 | 2070 | 1.1383 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.3 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
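Since this repo holds PEFT adapter weights rather than a full model (see the `peft` library tag), loading means attaching the adapter to the base checkpoint. A hedged sketch, assuming the adapter files live under this repo id and that `accelerate` is installed for `device_map="auto"`:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "coinplusfire/coinplusfire_llm_full_2"  # repo id from this record

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the fine-tuned LoRA/QLoRA adapter
```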
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2", "model-index": [{"name": "coinplusfire_llm_full_2", "results": []}]}
coinplusfire/coinplusfire_llm_full_2
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-17T16:27:18+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
coinplusfire\_llm\_full\_2 ========================== This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1383 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0002 * train\_batch\_size: 4 * eval\_batch\_size: 4 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 2 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.3 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 2\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
null
# Hello
{}
RIT4AGI/System2VLM
null
[ "region:us" ]
null
2024-04-17T16:27:58+00:00
[]
[]
TAGS #region-us
# Hello
[ "# Hello" ]
[ "TAGS\n#region-us \n", "# Hello" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
HenryCai1129/LlamaAdapter-llama2-happy-1000.009
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:28:39+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text2text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
alshelt/ctrlsum-t5-cnndm
null
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:30:18+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-multilingual-cased-Actitud_de_tener_la_razon_Esp This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2706 - Accuracy: 0.8849 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5161 | 1.0 | 53 | 0.4473 | 0.8170 | | 0.5033 | 2.0 | 106 | 0.3134 | 0.8659 | | 0.3243 | 3.0 | 159 | 0.2706 | 0.8849 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-multilingual-cased-Actitud_de_tener_la_razon_Esp", "results": []}]}
rogelioplatt/distilbert-base-multilingual-cased-Actitud_de_tener_la_razon_Esp
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:30:57+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
distilbert-base-multilingual-cased-Actitud\_de\_tener\_la\_razon\_Esp ===================================================================== This model is a fine-tuned version of distilbert-base-multilingual-cased on the None dataset. It achieves the following results on the evaluation set: * Loss: 0.2706 * Accuracy: 0.8849 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 12 * eval\_batch\_size: 12 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3 ### Training results ### Framework versions * Transformers 4.28.0 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.13.3
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 12\n* eval\\_batch\\_size: 12\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3", "### Training results", "### Framework versions\n\n\n* Transformers 4.28.0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.13.3" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper-base-Ar-MDD This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2159 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0684 | 1.0 | 546 | 0.2054 | | 0.0321 | 2.0 | 1092 | 0.2022 | | 0.0345 | 3.0 | 1638 | 0.1919 | | 0.0176 | 4.0 | 2184 | 0.1864 | | 0.0303 | 5.0 | 2730 | 0.1919 | | 0.0182 | 6.0 | 3276 | 0.1999 | | 0.0083 | 7.0 | 3822 | 0.2039 | | 0.008 | 8.0 | 4368 | 0.2056 | | 0.0028 | 9.0 | 4914 | 0.2153 | | 0.0022 | 10.0 | 5460 | 0.2159 | ### Framework versions - PEFT 0.10.0 - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
{"license": "apache-2.0", "library_name": "peft", "tags": ["generated_from_trainer"], "base_model": "openai/whisper-base", "model-index": [{"name": "Whisper-base-Ar-MDD", "results": []}]}
nrshoudi/Whisper-base-Ar-MDD
null
[ "peft", "safetensors", "generated_from_trainer", "base_model:openai/whisper-base", "license:apache-2.0", "region:us" ]
null
2024-04-17T16:31:26+00:00
[]
[]
TAGS #peft #safetensors #generated_from_trainer #base_model-openai/whisper-base #license-apache-2.0 #region-us
Whisper-base-Ar-MDD =================== This model is a fine-tuned version of openai/whisper-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.2159 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.001 * train\_batch\_size: 6 * eval\_batch\_size: 6 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 50 * num\_epochs: 10 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #generated_from_trainer #base_model-openai/whisper-base #license-apache-2.0 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.001\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 50\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Enagamirzayev/whisper-small-llm-lingo-adapters_l
null
[ "transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:31:57+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
automatic-speech-recognition
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Enagamirzayev/whisper-small-llm-lingo_l
null
[ "transformers", "safetensors", "whisper", "automatic-speech-recognition", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:32:23+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #whisper #automatic-speech-recognition #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
crich/Llama-2-7b-chat-hf-itbls-modify
null
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T16:34:35+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - Accuracy: 0.8833 - F1: 0.8860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
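The card stops at the framework versions and gives no usage snippet. A minimal hedged inference sketch follows: the repository id is taken from this record, and because the training dataset is unknown, the generic `LABEL_0`/`LABEL_1` names in the output are an assumption rather than documented semantics.

```python
# Hedged sketch: run the fine-tuned sentiment classifier via the pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="carlodallaquercia/finetuning-sentiment-model-3000-samples",
)
print(classifier("This movie was surprisingly good!"))
# e.g. [{'label': 'LABEL_1', 'score': 0.98}] -- label meanings depend on the
# (undocumented) training dataset.
```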
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": []}]}
carlodallaquercia/finetuning-sentiment-model-3000-samples
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:36:22+00:00
[]
[]
TAGS #transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
# finetuning-sentiment-model-3000-samples This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4137 - Accuracy: 0.8833 - F1: 0.8860 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.30.0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.13.3
[ "# finetuning-sentiment-model-3000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4137\n- Accuracy: 0.8833\n- F1: 0.8860", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
[ "TAGS\n#transformers #pytorch #tensorboard #distilbert #text-classification #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "# finetuning-sentiment-model-3000-samples\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.4137\n- Accuracy: 0.8833\n- F1: 0.8860", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.30.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.13.3" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komodo-7b-100epochs-LoRA-LaMini-2e-4 This model is a fine-tuned version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
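The card does not show how to attach the adapter to its base model. A minimal sketch under the usual PEFT conventions follows; the half-precision dtype and `device_map` choices are assumptions of convenience, not settings taken from the card.

```python
# Hedged sketch: load the base model, then attach this record's LoRA adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Yellow-AI-NLP/komodo-7b-base"
adapter_id = "hanifsyarubany10/komodo-7b-100epochs-LoRA-LaMini-2e-4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"  # dtype is an assumption
)
model = PeftModel.from_pretrained(base, adapter_id)  # adapter weights applied on top
```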
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Yellow-AI-NLP/komodo-7b-base", "model-index": [{"name": "komodo-7b-100epochs-LoRA-LaMini-2e-4", "results": []}]}
hanifsyarubany10/komodo-7b-100epochs-LoRA-LaMini-2e-4
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:Yellow-AI-NLP/komodo-7b-base", "license:llama2", "region:us" ]
null
2024-04-17T16:36:43+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us
# komodo-7b-100epochs-LoRA-LaMini-2e-4 This model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
[ "# komodo-7b-100epochs-LoRA-LaMini-2e-4\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us \n", "# komodo-7b-100epochs-LoRA-LaMini-2e-4\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_word2vec_cbow_s300

This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.

The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).

This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_cbow_s300')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(929607, 300)
  )
  (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Citing & Authors

```bibtex
@inproceedings{hartmann2017portuguese,
  title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria},
  year = {2017},
  publisher = {SBC},
  booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
  url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
```
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_word2vec_cbow_s300
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:38:03+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_word2vec_cbow_s300 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_word2vec_cbow_s300\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_word2vec_cbow_s300\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
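This record carries no pipeline tag and the card's usage section is empty. If the model id is accurate and this is a wav2vec2-base-960h fine-tune, loading it as a speech-recognition pipeline would look roughly like the sketch below; the task itself is an inference from the name, not something the card states.

```python
# Hedged sketch: ASR usage assumed from the wav2vec2-base-960h naming.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sin66x/wav2vec2-base-960h-demo-colab",
)
print(asr("sample.wav"))  # hypothetical local audio file; returns {"text": ...}
```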
{"library_name": "transformers", "tags": []}
sin66x/wav2vec2-base-960h-demo-colab
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:38:19+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon_helpfulness_classification_on_base_no_pretraining This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4611 - Accuracy: 0.8664 - F1 Macro: 0.6902 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3234 | 1.0 | 7204 | 0.3502 | 0.8658 | 0.5841 | | 0.3102 | 2.0 | 14408 | 0.3271 | 0.869 | 0.6652 | | 0.287 | 3.0 | 21612 | 0.3579 | 0.8692 | 0.6622 | | 0.2685 | 4.0 | 28816 | 0.3589 | 0.872 | 0.6662 | | 0.2437 | 5.0 | 36020 | 0.4797 | 0.8644 | 0.6926 | | 0.163 | 6.0 | 43224 | 0.5644 | 0.862 | 0.6610 | | 0.1475 | 7.0 | 50428 | 0.5918 | 0.8638 | 0.6611 | | 0.1175 | 8.0 | 57632 | 0.6703 | 0.8624 | 0.6685 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
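The card reports accuracy and macro-F1 but includes no usage snippet. A minimal hedged sketch follows; the card does not document the label mapping, so treat the `LABEL_0`/`LABEL_1` semantics in the output as unknown.

```python
# Hedged sketch: score a review with the fine-tuned helpfulness classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="BigTMiami/amazon_helpfulness_classification_on_base_no_pretraining",
)
print(clf("This review explains exactly which sizes run small."))
```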
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "roberta-base", "model-index": [{"name": "amazon_helpfulness_classification_on_base_no_pretraining", "results": []}]}
BigTMiami/amazon_helpfulness_classification_on_base_no_pretraining
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:38:30+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us
amazon\_helpfulness\_classification\_on\_base\_no\_pretraining ============================================================== This model is a fine-tuned version of roberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4611 * Accuracy: 0.8664 * F1 Macro: 0.6902 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.06 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-roberta-base #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # komodo-7b-100epochs-LoRA-LaMini-1e-3 This model is a fine-tuned version of [Yellow-AI-NLP/komodo-7b-base](https://huggingface.co/Yellow-AI-NLP/komodo-7b-base) on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
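Loading this adapter works exactly like its 2e-4 sibling above, so that part is not repeated. The one extra step worth sketching is merging the LoRA weights into the base model for standalone inference; whether merging suits this checkpoint is an assumption, and the output directory name is hypothetical.

```python
# Hedged sketch: merge the LoRA adapter into the base weights, then save
# the result as a plain transformers model.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Yellow-AI-NLP/komodo-7b-base")
model = PeftModel.from_pretrained(
    base, "hanifsyarubany10/komodo-7b-100epochs-LoRA-LaMini-1e-3"
)
merged = model.merge_and_unload()                  # adapter baked into the weights
merged.save_pretrained("komodo-7b-lamini-merged")  # hypothetical output directory
```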
{"license": "llama2", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "datasets": ["generator"], "base_model": "Yellow-AI-NLP/komodo-7b-base", "model-index": [{"name": "komodo-7b-100epochs-LoRA-LaMini-1e-3", "results": []}]}
hanifsyarubany10/komodo-7b-100epochs-LoRA-LaMini-1e-3
null
[ "peft", "tensorboard", "safetensors", "trl", "sft", "generated_from_trainer", "dataset:generator", "base_model:Yellow-AI-NLP/komodo-7b-base", "license:llama2", "region:us" ]
null
2024-04-17T16:38:36+00:00
[]
[]
TAGS #peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us
# komodo-7b-100epochs-LoRA-LaMini-1e-3 This model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - lr_scheduler_warmup_ratio: 0.03 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.19.0
[ "# komodo-7b-100epochs-LoRA-LaMini-1e-3\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
[ "TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #dataset-generator #base_model-Yellow-AI-NLP/komodo-7b-base #license-llama2 #region-us \n", "# komodo-7b-100epochs-LoRA-LaMini-1e-3\n\nThis model is a fine-tuned version of Yellow-AI-NLP/komodo-7b-base on the generator dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.001\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 4\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: constant\n- lr_scheduler_warmup_ratio: 0.03\n- training_steps: 100\n- mixed_precision_training: Native AMP", "### Training results", "### Framework versions\n\n- PEFT 0.10.1.dev0\n- Transformers 4.40.0.dev0\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.19.0" ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_word2vec_skip_s100

This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model.

The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc).

This model maps sentences & paragraphs to a 100 dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_skip_s100')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard)

## Full Model Architecture

```
SentenceTransformer(
  (0): WordEmbeddings(
    (emb_layer): Embedding(929607, 100)
  )
  (1): Pooling({'word_embedding_dimension': 100, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Citing & Authors

```bibtex
@inproceedings{hartmann2017portuguese,
  title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks},
  author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria},
  year = {2017},
  publisher = {SBC},
  booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL},
  url = {https://sol.sbc.org.br/index.php/stil/article/view/4008}
}
```
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_word2vec_skip_s100
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:39:36+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_word2vec_skip_s100 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 100 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_word2vec_skip_s100\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 100 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_word2vec_skip_s100\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 100 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
text-generation
peft
# Model Card for molbal/novelstral-7b Short response, text completion model trained on various novels. ## Model Details This is a text completion model, designed to advance a story a few lines at a time. - **Developed by:** https://huggingface.co/molbal - **Model type:** Mistral 7b fine-tune - **Language(s) (NLP):** English only - **License:** wtfpl - **Finetuned from model:** unsloth/mistral-7b-bnb-4bit - **Notes:** This model is in 4bit quants only, as its primary purpose is experimentation and that's what performs well locally on my laptop ### Framework versions - PEFT 0.10.0 - Unsloth for training
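The card names no inference path. Given the gguf tag on this record, llama-cpp-python is one plausible route for local use; in the sketch below the GGUF filename is hypothetical (check the repository's file listing) and the sampling settings are arbitrary.

```python
# Hedged sketch: advance a story a few lines at a time with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="novelstral-7b-q4.gguf", n_ctx=2048)  # filename is hypothetical
story = "The lighthouse keeper had not spoken to anyone in three weeks."
out = llm(story, max_tokens=80, temperature=0.8)
print(out["choices"][0]["text"])  # the model's continuation of the story
```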
{"language": ["en"], "license": "wtfpl", "library_name": "peft", "base_model": "unsloth/mistral-7b-bnb-4bit", "pipeline_tag": "text-generation"}
molbal/novelstral-7b
null
[ "peft", "gguf", "text-generation", "en", "base_model:unsloth/mistral-7b-bnb-4bit", "license:wtfpl", "region:us" ]
null
2024-04-17T16:40:26+00:00
[]
[ "en" ]
TAGS #peft #gguf #text-generation #en #base_model-unsloth/mistral-7b-bnb-4bit #license-wtfpl #region-us
# Model Card for molbal/novelstral-7b Short response, text completion model trained on various novels. ## Model Details This is a text completion model, designed to advance a story a few lines at a time. - Developed by: URL - Model type: Mistral 7b fine-tune - Language(s) (NLP): English only - License: wtfpl - Finetuned from model: unsloth/mistral-7b-bnb-4bit - Notes: This model is in 4bit quants only, as its primary purpose is experimentation and that's what performs well locally on my laptop ### Framework versions - PEFT 0.10.0 - Unsloth for training
[ "# Model Card for molbal/novelstral-7b\n\nShort response, text completion model trained on various novels.", "## Model Details\n\nThis is a text completion model, designed to advance a story a few lines at a time. \n\n- Developed by: URL\n- Model type: Mistral 7b fine-tune\n- Language(s) (NLP): English only\n- License: wtfpl\n- Finetuned from model: unsloth/mistral-7b-bnb-4bit\n- Notes: This model is in 4bit quants only, as its primary purpose is experimentation and that's what performs well locally on my laptop", "### Framework versions\n\n- PEFT 0.10.0\n- Unsloth for training" ]
[ "TAGS\n#peft #gguf #text-generation #en #base_model-unsloth/mistral-7b-bnb-4bit #license-wtfpl #region-us \n", "# Model Card for molbal/novelstral-7b\n\nShort response, text completion model trained on various novels.", "## Model Details\n\nThis is a text completion model, designed to advance a story a few lines at a time. \n\n- Developed by: URL\n- Model type: Mistral 7b fine-tune\n- Language(s) (NLP): English only\n- License: wtfpl\n- Finetuned from model: unsloth/mistral-7b-bnb-4bit\n- Notes: This model is in 4bit quants only, as its primary purpose is experimentation and that's what performs well locally on my laptop", "### Framework versions\n\n- PEFT 0.10.0\n- Unsloth for training" ]
null
null
EXL2 quants of [Mixtral 8x22B Instruct v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1/tree/main) [2.30 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/2.3bpw) [2.50 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/2.5bpw) [2.70 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/2.7bpw) [3.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/3.0bpw) [3.50 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/3.5bpw) [3.75 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/3.75bpw) [4.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/4.0bpw) [4.50 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/4.5bpw) [5.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/5.0bpw) [6.00 bits per weight](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/tree/6.0bpw) [measurement.json](https://huggingface.co/turboderp/Mixtral-8x22B-Instruct-v0.1-exl2/blob/main/measurement.json)
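Each quant lives on its own branch (the links above point at per-bpw revisions), so a download has to name an explicit revision. A minimal sketch with `huggingface_hub` follows; the local directory name is an assumption.

```python
# Hedged sketch: fetch one bits-per-weight variant by branch name.
from huggingface_hub import snapshot_download

path = snapshot_download(
    "turboderp/Mixtral-8x22B-Instruct-v0.1-exl2",
    revision="2.5bpw",                 # branch names mirror the links above
    local_dir="mixtral-exl2-2.5bpw",   # hypothetical local directory
)
print(path)
```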
{}
turboderp/Mixtral-8x22B-Instruct-v0.1-exl2
null
[ "region:us" ]
null
2024-04-17T16:40:44+00:00
[]
[]
TAGS #region-us
EXL2 quants of Mixtral 8x22B Instruct v0.1 2.30 bits per weight 2.50 bits per weight 2.70 bits per weight 3.00 bits per weight 3.50 bits per weight 3.75 bits per weight 4.00 bits per weight 4.50 bits per weight 5.00 bits per weight 6.00 bits per weight URL
[]
[ "TAGS\n#region-us \n" ]
null
peft
# Medical-Mixtral-7B-v2k
[![](future.jpg)](https://ruslanmv.com/)

## Description
Fine-tuned Mixtral model for answering medical assistance questions. This model is a novel version of mistralai/Mistral-7B-Instruct-v0.2, adapted to a subset of 2.0k records from the AI Medical Chatbot dataset, which contains 250k records (https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot). The purpose of this model is to provide a ready chatbot to answer questions related to medical assistance.

## Intended Use
This model is intended for providing assistance and answering questions related to medical inquiries. It is suitable for use in chatbot applications where users seek medical advice, information, or assistance.

## Installation
```
pip install -qU transformers==4.36.2 datasets python-dotenv peft bitsandbytes accelerate
```

## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, logging, BitsAndBytesConfig
import os, torch

# Define the name of your fine-tuned model
finetuned_model = 'ruslanmv/Medical-Mixtral-7B-v2k'

# Load fine-tuned model
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)

model_pretrained = AutoModelForCausalLM.from_pretrained(
    finetuned_model,
    quantization_config=bnb_config,  # 4-bit loading is configured here; do not also pass load_in_4bit
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(finetuned_model, trust_remote_code=True)

# Set pad_token_id to eos_token_id
model_pretrained.config.pad_token_id = tokenizer.eos_token_id

pipe = pipeline(task="text-generation", model=model_pretrained, tokenizer=tokenizer, max_length=100)

def build_prompt(question):
  prompt=f"[INST]@Enlighten. {question} [/INST]"
  return prompt

question = "What does abutment of the nerve root mean?"
prompt = build_prompt(question)

# Generate text based on the prompt
result = pipe(prompt)[0]
generated_text = result['generated_text']

# Remove the prompt from the generated text
generated_text = generated_text.replace(prompt, "", 1).strip()
print(generated_text)
```
You will get something like:
```
Please help. For more information consult an internal medicine physician online ➜ http://iclinic.com/e/gastroenterologist-online-consultation.php.
```
You can also define a small helper:
```python
def ask(question):
    promptEnding = "[/INST]"
    # Guide for answering questions
    testGuide = 'Answer the following question, at the end of your response say thank you for your query.\n'
    # Build the question prompt
    question = testGuide + question + "\n"
    print(question)
    # Build the prompt
    prompt = build_prompt(question)
    # Generate answer
    result = pipe(prompt)
    llmAnswer = result[0]['generated_text']
    # Remove the prompt from the generated answer
    index = llmAnswer.find(promptEnding)
    llmAnswer = llmAnswer[len(promptEnding) + index:]
    print("LLM Answer:")
    print(llmAnswer)

question = "For how long should I take Kalachikai powder to overcome PCOD problem?"
ask(question)
```

## Training Data
- **Dataset Name:** AI Medical Chatbot
- **Dataset URL:** https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot
- **Dataset Size:** 250k records
- **Subset Used:** 2.0k records

## Limitations
The model's performance may vary depending on the complexity and specificity of the medical questions.
The model may not provide accurate answers for every medical query, and users should consult medical professionals for critical healthcare concerns.
## Ethical Considerations Users should be informed that the model's responses are generated based on patterns in the training data and may not always be accurate or suitable for medical decision-making. The model should not be used as a replacement for professional medical advice or diagnosis. Sensitive patient data should not be shared with the model, and user privacy should be protected.
{"language": ["en"], "license": "apache-2.0", "library_name": "peft", "tags": ["text-generation-inference", "transformers", "ruslanmv", "mistral", "trl"], "datasets": ["ruslanmv/ai-medical-chatbot"], "base_model": "mistralai/Mistral-7B-Instruct-v0.2"}
ruslanmv/Medical-Mixtral-7B-v2k
null
[ "peft", "safetensors", "text-generation-inference", "transformers", "ruslanmv", "mistral", "trl", "en", "dataset:ruslanmv/ai-medical-chatbot", "base_model:mistralai/Mistral-7B-Instruct-v0.2", "license:apache-2.0", "region:us" ]
null
2024-04-17T16:41:18+00:00
[]
[ "en" ]
TAGS #peft #safetensors #text-generation-inference #transformers #ruslanmv #mistral #trl #en #dataset-ruslanmv/ai-medical-chatbot #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us
# Medical-Mixtral-7B-v2k
![](URL

## Description
Fine-tuned Mixtral model for answering medical assistance questions. This model is a novel version of mistralai/Mistral-7B-Instruct-v0.2, adapted to a subset of 2.0k records from the AI Medical Chatbot dataset, which contains 250k records (URL The purpose of this model is to provide a ready chatbot to answer questions related to medical assistance.

## Intended Use
This model is intended for providing assistance and answering questions related to medical inquiries. It is suitable for use in chatbot applications where users seek medical advice, information, or assistance.

## Installation

## Example Usage

You will get something like:

You can also define a small helper:

## Training Data
- Dataset Name: AI Medical Chatbot
- Dataset URL: URL
- Dataset Size: 250k records
- Subset Used: 2.0k records

## Limitations
The model's performance may vary depending on the complexity and specificity of the medical questions.
The model may not provide accurate answers for every medical query, and users should consult medical professionals for critical healthcare concerns.

## Ethical Considerations
Users should be informed that the model's responses are generated based on patterns in the training data and may not always be accurate or suitable for medical decision-making.
The model should not be used as a replacement for professional medical advice or diagnosis.
Sensitive patient data should not be shared with the model, and user privacy should be protected.
[ "# Medical-Mixtral-7B-v2k\n![](URL", "## Description\nFine-tuned Mixtral model for answering medical assistance questions. This model is a novel version of mistralai/Mistral-7B-Instruct-v0.2, adapted to a subset of 2.0k records from the AI Medical Chatbot dataset, which contains 250k records (URL The purpose of this model is to provide a ready chatbot to answer questions related to medical assistance.", "## Intended Use\nThis model is intended for providing assistance and answering questions related to medical inquiries. It is suitable for use in chatbot applications where users seek medical advice, information, or assistance.", "## Installation", "## Example Usage\n\nyou will get somethinng like\n\n\n\n\nalso you can", "## Training Data\n- Dataset Name: AI Medical Chatbot\n- Dataset URL: URL\n- Dataset Size: 250k records\n- Subset Used: 2.0k records", "## Limitations\nThe model's performance may vary depending on the complexity and specificity of the medical questions.\nThe model may not provide accurate answers for every medical query, and users should consult medical professionals for critical healthcare concerns.", "## Ethical Considerations\nUsers should be informed that the model's responses are generated based on patterns in the training data and may not always be accurate or suitable for medical decision-making.\nThe model should not be used as a replacement for professional medical advice or diagnosis.\nSensitive patient data should not be shared with the model, and user privacy should be protected." ]
[ "TAGS\n#peft #safetensors #text-generation-inference #transformers #ruslanmv #mistral #trl #en #dataset-ruslanmv/ai-medical-chatbot #base_model-mistralai/Mistral-7B-Instruct-v0.2 #license-apache-2.0 #region-us \n", "# Medical-Mixtral-7B-v2k\n![](URL", "## Description\nFine-tuned Mixtral model for answering medical assistance questions. This model is a novel version of mistralai/Mistral-7B-Instruct-v0.2, adapted to a subset of 2.0k records from the AI Medical Chatbot dataset, which contains 250k records (URL The purpose of this model is to provide a ready chatbot to answer questions related to medical assistance.", "## Intended Use\nThis model is intended for providing assistance and answering questions related to medical inquiries. It is suitable for use in chatbot applications where users seek medical advice, information, or assistance.", "## Installation", "## Example Usage\n\nyou will get somethinng like\n\n\n\n\nalso you can", "## Training Data\n- Dataset Name: AI Medical Chatbot\n- Dataset URL: URL\n- Dataset Size: 250k records\n- Subset Used: 2.0k records", "## Limitations\nThe model's performance may vary depending on the complexity and specificity of the medical questions.\nThe model may not provide accurate answers for every medical query, and users should consult medical professionals for critical healthcare concerns.", "## Ethical Considerations\nUsers should be informed that the model's responses are generated based on patterns in the training data and may not always be accurate or suitable for medical decision-making.\nThe model should not be used as a replacement for professional medical advice or diagnosis.\nSensitive patient data should not be shared with the model, and user privacy should be protected." ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_word2vec_skip_s300 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model. The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc). This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_skip_s300') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(929607, 300) ) (1): Pooling({'word_embedding_dimension': 300, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors ```bibtex @inproceedings{hartmann2017portuguese, title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks}, author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria}, year = {2017}, publisher = {SBC}, booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL}, url = {https://sol.sbc.org.br/index.php/stil/article/view/4008} } ```
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_word2vec_skip_s300
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:41:44+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_word2vec_skip_s300 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_word2vec_skip_s300\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_word2vec_skip_s300\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 300 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
null
peft
## Training procedure ### Framework versions - PEFT 0.4.0
{"library_name": "peft"}
Laugoon/squad-bloom-3b
null
[ "peft", "safetensors", "region:us" ]
null
2024-04-17T16:42:08+00:00
[]
[]
TAGS #peft #safetensors #region-us
## Training procedure ### Framework versions - PEFT 0.4.0
[ "## Training procedure", "### Framework versions\n\n\n- PEFT 0.4.0" ]
[ "TAGS\n#peft #safetensors #region-us \n", "## Training procedure", "### Framework versions\n\n\n- PEFT 0.4.0" ]
sentence-similarity
sentence-transformers
# mteb-pt/average_pt_nilc_word2vec_skip_s50 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a [sentence-transformers](https://www.SBERT.net) model. The original pre-trained word embeddings can be found at: [http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc](http://nilc.icmc.usp.br/nilc/index.php/repositorio-de-word-embeddings-do-nilc). This model maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('mteb-pt/average_pt_nilc_word2vec_skip_s50') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: [mteb-pt/leaderboard](https://huggingface.co/spaces/mteb-pt/leaderboard) ## Full Model Architecture ``` SentenceTransformer( (0): WordEmbeddings( (emb_layer): Embedding(929607, 50) ) (1): Pooling({'word_embedding_dimension': 50, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Citing & Authors ```bibtex @inproceedings{hartmann2017portuguese, title = {Portuguese Word Embeddings: Evaluating on Word Analogies and Natural Language Tasks}, author = {Hartmann, Nathan S and Fonseca, Erick R and Shulby, Christopher D and Treviso, Marcos V and Rodrigues, J{\'{e}}ssica S and Alu{\'{\i}}sio, Sandra Maria}, year = {2017}, publisher = {SBC}, booktitle = {Brazilian Symposium in Information and Human Language Technology - STIL}, url = {https://sol.sbc.org.br/index.php/stil/article/view/4008} } ```
{"language": ["pt"], "library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity"], "pipeline_tag": "sentence-similarity"}
pt-mteb/average_pt_nilc_word2vec_skip_s50
null
[ "sentence-transformers", "feature-extraction", "sentence-similarity", "pt", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:42:43+00:00
[]
[ "pt" ]
TAGS #sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us
# mteb-pt/average_pt_nilc_word2vec_skip_s50 This is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. The original pre-trained word embeddings can be found at: URL This model maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search. ## Usage (Sentence-Transformers) Using this model becomes easy when you have sentence-transformers installed: Then you can use the model like this: ## Evaluation Results For an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard ## Full Model Architecture ## Citing & Authors
[ "# mteb-pt/average_pt_nilc_word2vec_skip_s50\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
[ "TAGS\n#sentence-transformers #feature-extraction #sentence-similarity #pt #endpoints_compatible #region-us \n", "# mteb-pt/average_pt_nilc_word2vec_skip_s50\n\nThis is an adaptation of pre-trained Portuguese Word2Vec Word Embeddings to a sentence-transformers model. \n\nThe original pre-trained word embeddings can be found at: URL \n\nThis model maps sentences & paragraphs to a 50 dimensional dense vector space and can be used for tasks like clustering or semantic search.", "## Usage (Sentence-Transformers)\n\nUsing this model becomes easy when you have sentence-transformers installed:\n\n\n\nThen you can use the model like this:", "## Evaluation Results\n\nFor an automated evaluation of this model, see the *Portuguese MTEB Leaderboard*: mteb-pt/leaderboard", "## Full Model Architecture", "## Citing & Authors" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/CultriX/NeuralShadow-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/NeuralShadow-7B-GGUF/resolve/main/NeuralShadow-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
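A minimal sketch of running one of these files locally, assuming the `llama-cpp-python` bindings and the Q8_0 file from the table above downloaded into the working directory (the prompt and context size are placeholders):

```python
# Illustrative sketch: load a GGUF quant with the llama-cpp-python bindings.
from llama_cpp import Llama

llm = Llama(model_path="NeuralShadow-7B.Q8_0.gguf", n_ctx=2048)
out = llm("Explain what a merged language model is.", max_tokens=128)
print(out["choices"][0]["text"])
```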
{"language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": "CultriX/NeuralShadow-7B", "quantized_by": "mradermacher"}
mradermacher/NeuralShadow-7B-GGUF
null
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:CultriX/NeuralShadow-7B", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:43:04+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mergekit #merge #en #base_model-CultriX/NeuralShadow-7B #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mergekit #merge #en #base_model-CultriX/NeuralShadow-7B #endpoints_compatible #region-us \n" ]
fill-mask
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # github_cybersecurity_READMEs This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4702 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.4248 | 1.0 | 12012 | 2.6210 | | 2.2862 | 2.0 | 24024 | 2.5200 | | 2.2091 | 3.0 | 36036 | 2.4696 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
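Since the card gives no usage snippet, here is a minimal fill-mask sketch (the model id comes from this record's metadata; the input sentence and predictions are illustrative):

```python
# Illustrative sketch: query the fine-tuned masked language model.
from transformers import pipeline

# RoBERTa-style checkpoints use <mask> as the mask token.
fill = pipeline("fill-mask", model="Tiffany0313/github_cybersecurity_READMEs")
for pred in fill("Report security <mask> privately to the maintainers."):
    print(pred["token_str"], round(pred["score"], 3))
```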
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilroberta-base", "model-index": [{"name": "github_cybersecurity_READMEs", "results": []}]}
Tiffany0313/github_cybersecurity_READMEs
null
[ "transformers", "tensorboard", "safetensors", "roberta", "fill-mask", "generated_from_trainer", "base_model:distilroberta-base", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:44:01+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
github\_cybersecurity\_READMEs ============================== This model is a fine-tuned version of distilroberta-base on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.4702 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * num\_epochs: 3.0 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["trl", "sft"]}
aminlouhichi/gemma_text_tosqlV2
null
[ "transformers", "safetensors", "trl", "sft", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:44:22+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #trl #sft #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-e5-large-guardrail-unknown_task-classifier-training This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "intfloat/multilingual-e5-large", "model-index": [{"name": "multilingual-e5-large-guardrail-unknown_task-classifier-training", "results": []}]}
tosh97/multilingual-e5-large-guardrail-unknown_task-classifier-training
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:intfloat/multilingual-e5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:44:30+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
# multilingual-e5-large-guardrail-unknown_task-classifier-training This model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# multilingual-e5-large-guardrail-unknown_task-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# multilingual-e5-large-guardrail-unknown_task-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon_helpfulness_classification_on_base_from_DAPT_5M_pretraining This model is a fine-tuned version of [BigTMiami/amazon_pretraining_5M_model_corrected](https://huggingface.co/BigTMiami/amazon_pretraining_5M_model_corrected) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7517 - Accuracy: 0.8699 - F1 Macro: 0.6736 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3151 | 1.0 | 7204 | 0.3461 | 0.8766 | 0.6664 | | 0.2861 | 2.0 | 14408 | 0.3429 | 0.8736 | 0.6544 | | 0.2788 | 3.0 | 21612 | 0.3600 | 0.8722 | 0.6466 | | 0.2585 | 4.0 | 28816 | 0.3805 | 0.8682 | 0.6789 | | 0.1873 | 5.0 | 36020 | 0.5306 | 0.871 | 0.6660 | | 0.1333 | 6.0 | 43224 | 0.6493 | 0.8674 | 0.6675 | | 0.1369 | 7.0 | 50428 | 0.7657 | 0.869 | 0.6799 | | 0.0936 | 8.0 | 57632 | 0.8041 | 0.8674 | 0.6779 | | 0.1062 | 9.0 | 64836 | 0.9458 | 0.867 | 0.6633 | | 0.0463 | 10.0 | 72040 | 1.0079 | 0.8682 | 0.6684 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
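A minimal inference sketch for this classifier (model id from the record's metadata; the example text is a placeholder and the label names depend on the fine-tuning config):

```python
# Illustrative sketch: score a review with the fine-tuned helpfulness classifier.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="BigTMiami/amazon_helpfulness_classification_on_base_from_DAPT_5M_pretraining",
)
print(clf("This review explains exactly which sizes run small. Very helpful."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- actual labels depend on the config
```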
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "BigTMiami/amazon_pretraining_5M_model_corrected", "model-index": [{"name": "amazon_helpfulness_classification_on_base_from_DAPT_5M_pretraining", "results": []}]}
BigTMiami/amazon_helpfulness_classification_on_base_from_DAPT_5M_pretraining
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:BigTMiami/amazon_pretraining_5M_model_corrected", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:45:26+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us
amazon\_helpfulness\_classification\_on\_base\_from\_DAPT\_5M\_pretraining ========================================================================== This model is a fine-tuned version of BigTMiami/amazon\_pretraining\_5M\_model\_corrected on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.7517 * Accuracy: 0.8699 * F1 Macro: 0.6736 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.06 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/amazon_pretraining_5M_model_corrected #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
text-generation
transformers
<img src="https://allenai.org/olmo/olmo-7b-animation.gif" alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> # Model Card for OLMo 1.7-7B-hf OLMo 1.7 7B is the latest version of the original [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) model rocking a 24 point increase in MMLU, among other evaluations improvements, from an improved version of the Dolma dataset and staged training. **This version is for direct use with HuggingFace Transformers** from v4.40 on. OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models. The OLMo models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset. We release all code, checkpoints, logs, and details involved in training these models. ## Model Details The core models released in this batch are the following: | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length | |------|--------|---------|-------------|-----------------|----------------| | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion |16 | 2048 | 16 | 2048 | | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 | | [OLMo 1.7-7B](https://huggingface.co/allenai/OLMo-1.7-7B) | 2.05 Trillion | 32 | 4096 | 32 | 4096 | *Note: OLMo 1.7-7B also includes QKV clipping.* [Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps. The naming convention is `step1000-tokens4B`. To load a specific model revision with HuggingFace, simply add the argument `revision`: ```bash import hf_olmo # pip install ai2-olmo olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf", revision="step1000-tokens4B") ``` All revisions/branches are listed in the file `revisions.txt`. Or, you can access all the revisions for the models via the following code snippet: ```python from huggingface_hub import list_repo_refs out = list_repo_refs("allenai/OLMo-1.7-7B-hf") branches = [b.name for b in out.branches] ``` A few revisions were lost due to an error, but the vast majority are present. ### Model Description - **Developed by:** Allen Institute for AI (AI2) - **Supported by:** Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW - **Model type:** a Transformer style autoregressive language model. - **Language(s) (NLP):** English - **License:** The code and model are released under Apache 2.0. - **Contact:** Technical inquiries: `olmo at allenai dot org`. Press: `press at allenai dot org` - **Date cutoff:** Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version. 
### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
    - Core repo (training, inference, fine-tuning etc.): https://github.com/allenai/OLMo
    - Evaluation code: https://github.com/allenai/OLMo-Eval
    - Further fine-tuning code: https://github.com/allenai/open-instruct
- **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- **Technical blog post:** https://blog.allenai.org/olmo-1-7-7b-a-24-point-improvement-on-mmlu-92b43f7d269d
- **W&B Logs:** [pretraining](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B), [annealing](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B-anneal)
<!-- - **Press release:** TODO -->

## Uses

### Inference

Install Transformers [from source](https://huggingface.co/docs/transformers/en/installation#install-from-source), or update to the next version when this [PR](https://github.com/huggingface/transformers/pull/29890) is integrated.

Now, proceed as usual with HuggingFace:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-7B-hf")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional verifying cuda
# inputs = {k: v.to('cuda') for k,v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is the first step to build natural language generation...'
```

Alternatively, with the pipeline abstraction:
```python
from transformers import pipeline

olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1.7-7B-hf")
print(olmo_pipe("Language modeling is "))
>> 'Language modeling is a branch of natural language processing that aims to...'
```

Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-7B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.

Note, you may see the following error if `ai2-olmo` is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.
```bash
    raise ImportError(
ImportError: This modeling file requires the following packages that were not found in your environment: hf_olmo. Run `pip install hf_olmo`
```

### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.
1. Fine-tune with the OLMo repository:
```bash
torchrun --nproc_per_node=8 scripts/train.py {path_to_train_config} \
    --data.paths=[{path_to_data}/input_ids.npy] \
    --data.label_mask_paths=[{path_to_data}/label_mask.npy] \
    --load_path={path_to_checkpoint} \
    --reset_trainer_state
```
For more documentation, see the [GitHub readme](https://github.com/allenai/OLMo?tab=readme-ov-file#fine-tuning).

2. Further fine-tuning support is being developed in AI2's Open Instruct repository. Details are [here](https://github.com/allenai/open-instruct).

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Core model results for the new and original 7B model are found below. 
| Task | Llama-7b | Llama2-7b | Falcon-7b | Mpt-7b | OLMo-7B | Llama2-13b | **OLMo 1.7-7B** | |-------------------|----------|-----------|-----------|--------|---------|------------|-------------| | arc_c | 44.5 | 48.5 | 47.5 | 46.5 | 48.5 | 52.8 | 42.5 | | arc_e | 67.9 | 69.5 | 70.4 | 70.5 | 65.4 | 73.7 | 67.2 | | boolq | 75.4 | 80.2 | 74.6 | 74.2 | 73.4 | 82.2 | 83.7 | | copa | 91.0 | 86.0 | 86.0 | 85.0 | 90.0 | 90.0 | 86.0 | | hellaswag | 76.2 | 76.8 | 75.9 | 77.6 | 76.4 | 78.6 | 75.5 | | openbookqa | 51.2 | 48.4 | 53.0 | 48.6 | 50.4 | 51.8 | 50.0 | | piqa | 77.2 | 76.7 | 78.5 | 77.3 | 78.4 | 79.0 | 77.5 | | sciq | 93.9 | 94.5 | 93.9 | 93.7 | 93.8 | 95.5 | 96.7 | | winogrande | 70.5 | 69.4 | 68.9 | 69.9 | 67.9 | 73.5 | 69.8 | | truthfulQA (MC2) | 33.9 | 38.5 | 34.0 | 33.0 | 36.0 | 36.8 | 35.8 | | MMLU (5 shot MC) | 31.5 | 45.0 | 24.0 | 30.8 | 28.3 | 55.5 | 52.0 | | GSM8k | 10.0 | 12.0 | 4.0 | 4.5 | 8.5 | 25.0 | 29.0 | | Full average | 60.3 | 62.1 | 59.2 | 59.3 | 59.8 | 66.2 | 63.8 | And for the 1B model: | task | random | [StableLM 2 1.6b](https://huggingface.co/stabilityai/stablelm-2-1_6b)\* | [Pythia 1B](https://huggingface.co/EleutherAI/pythia-1b) | [TinyLlama 1.1B](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T) | **OLMo 1B** (ours) | | ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------ | ----------------- | --------- | -------------------------------------- | ------- | | arc_challenge | 25 | 43.81 | 33.11 | 34.78 | 34.45 | | arc_easy | 25 | 63.68 | 50.18 | 53.16 | 58.07 | | boolq | 50 | 76.6 | 61.8 | 64.6 | 60.7 | | copa | 50 | 84 | 72 | 78 | 79 | | hellaswag | 25 | 68.2 | 44.7 | 58.7 | 62.5 | | openbookqa | 25 | 45.8 | 37.8 | 43.6 | 46.4 | | piqa | 50 | 74 | 69.1 | 71.1 | 73.7 | | sciq | 25 | 94.7 | 86 | 90.5 | 88.1 | | winogrande | 50 | 64.9 | 53.3 | 58.9 | 58.9 | | Average | 36.11 | 68.41 | 56.44 | 61.48 | 62.42 | \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. ## Model Details ### Data For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma) documentation. **This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering**. During the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0. ### Staged training / annealing In contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum: * In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high. * At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below. Both stages contribute equally to the final performance of the OLMo model. 
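The two-stage schedule described above can be made concrete with a short sketch (illustrative only: the step counts below are placeholders, not the values used in training; only the warmup length and the peak/floor learning rates come from the text):

```python
import math

def olmo_1_7_lr(step, warmup=2500, cosine_total=900_000, cutoff=600_000,
                anneal=15_000, peak=3e-4, floor=3e-5):
    """Illustrative two-stage schedule: warmup + cosine decay cut off early,
    then a linear decay to 0 during the annealing stage.
    Step counts are placeholders, not the real training configuration."""
    if step < warmup:  # stage 1: linear warmup to the peak LR
        return peak * step / warmup

    def cos_lr(s):  # cosine decay scheduled to reach `floor` at `cosine_total`
        t = (s - warmup) / (cosine_total - warmup)
        return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))

    if step < cutoff:  # ...but stage 1 is cut off early, while the LR is still high
        return cos_lr(step)
    t = min((step - cutoff) / anneal, 1.0)  # stage 2: linear decay to 0
    return cos_lr(cutoff) * (1 - t)
```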
After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top. ### Architecture OLMo 7B architecture with peer models for comparison. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | PaLM 8B | |------------------------|-------------------|---------------------|--------------------|--------------------|------------------| | d_model | 4096 | 4096 | 4096 | 4544 | 4096 | | num heads | 32 | 32 | 32 | 71 | 16 | | num layers | 32 | 32 | 32 | 32 | 32 | | MLP ratio | ~8/3 | ~8/3 | ~8/3 | 4 | 4 | | LayerNorm type | non-parametric LN | RMSNorm | parametric LN | parametric LN | parametric LN | | pos embeddings | RoPE | RoPE | RoPE | RoPE | RoPE | | attention variant | full | GQA | full | MQA | MQA | | biases | none | none | in LN only | in LN only | none | | block type | sequential | sequential | sequential | parallel | parallel | | activation | SwiGLU | SwiGLU | SwiGLU | GeLU | SwiGLU | | sequence length | 2048 | 4096 | 2048 | 2048 | 2048 | | batch size (instances) | 2160 | 1024 | 2048 | 2304 | 512 | | batch size (tokens) | ~4M | ~4M | ~4M | ~4M | ~1M | | weight tying | no | no | no | no | yes | ### Hyperparameters AdamW optimizer parameters are shown below. | Size | Peak LR | Betas | Epsilon | Weight Decay | |------|------------|-----------------|-------------|--------------| | 1B | 4.0E-4 | (0.9, 0.95) | 1.0E-5 | 0.1 | | 7B | 3.0E-4 | (0.9, 0.99) | 1.0E-5 | 0.1 | Optimizer settings comparison with peer models. | | **OLMo 7B** | [Llama 2 7B](https://huggingface.co/meta-llama/Llama-2-7b) | [OpenLM 7B](https://laion.ai/blog/open-lm/) | [Falcon 7B](https://huggingface.co/tiiuae/falcon-7b) | |-----------------------|------------------|---------------------|--------------------|--------------------| | warmup steps | 5000 | 2000 | 2000 | 1000 | | peak LR | 3.0E-04 | 3.0E-04 | 3.0E-04 | 6.0E-04 | | minimum LR | 3.0E-05 | 3.0E-05 | 3.0E-05 | 1.2E-05 | | weight decay | 0.1 | 0.1 | 0.1 | 0.1 | | beta1 | 0.9 | 0.9 | 0.9 | 0.99 | | beta2 | 0.95 | 0.95 | 0.95 | 0.999 | | epsilon | 1.0E-05 | 1.0E-05 | 1.0E-05 | 1.0E-05 | | LR schedule | linear | cosine | cosine | cosine | | gradient clipping | global 1.0 | global 1.0 | global 1.0 | global 1.0 | | gradient reduce dtype | FP32 | FP32 | FP32 | BF16 | | optimizer state dtype | FP32 | most likely FP32 | FP32 | FP32 | ## Environmental Impact OLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML. A summary of the environmental impact. Further details are available in the paper. | | GPU Type | Power Consumption From GPUs | Carbon Intensity (kg CO₂e/KWh) | Carbon Emissions (tCO₂eq) | |-----------|------------|-----------------------------|--------------------------------|---------------------------| | OLMo 7B Twin | MI250X ([LUMI supercomputer](https://www.lumi-supercomputer.eu)) | 135 MWh | 0* | 0* | | OLMo 7B | A100-40GB ([MosaicML](https://www.mosaicml.com)) | 104 MWh | 0.656 | 75.05 | ## Bias, Risks, and Limitations Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology. 
Otherwise, many facts from OLMo or any LLM will often not be true, so they should be checked. ## Citation **BibTeX:** ``` @article{Groeneveld2023OLMo, title={OLMo: Accelerating the Science of Language Models}, author={Groeneveld, Dirk and Beltagy, Iz and Walsh, Pete and Bhagia, Akshita and Kinney, Rodney and Tafjord, Oyvind and Jha, Ananya Harsh and Ivison, Hamish and Magnusson, Ian and Wang, Yizhong and Arora, Shane and Atkinson, David and Authur, Russell and Chandu, Khyathi and Cohan, Arman and Dumas, Jennifer and Elazar, Yanai and Gu, Yuling and Hessel, Jack and Khot, Tushar and Merrill, William and Morrison, Jacob and Muennighoff, Niklas and Naik, Aakanksha and Nam, Crystal and Peters, Matthew E. and Pyatkin, Valentina and Ravichander, Abhilasha and Schwenk, Dustin and Shah, Saurabh and Smith, Will and Subramani, Nishant and Wortsman, Mitchell and Dasigi, Pradeep and Lambert, Nathan and Richardson, Kyle and Dodge, Jesse and Lo, Kyle and Soldaini, Luca and Smith, Noah A. and Hajishirzi, Hannaneh}, journal={Preprint}, year={2024} } ``` **APA:** Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint. ## Model Card Contact For errors in this model card, contact Nathan, `{nathanl} at allenai dot org`.
{"language": ["en"], "license": "apache-2.0", "datasets": ["allenai/dolma"]}
allenai/OLMo-1.7-7B-hf
null
[ "transformers", "safetensors", "olmo", "text-generation", "en", "dataset:allenai/dolma", "arxiv:2402.00838", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:46:55+00:00
[ "2402.00838" ]
[ "en" ]
TAGS #transformers #safetensors #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
<img src="URL alt="OLMo Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Model Card for OLMo 1.7-7B-hf ============================= OLMo 1.7 7B is the latest version of the original OLMo 7B model rocking a 24 point increase in MMLU, among other evaluations improvements, from an improved version of the Dolma dataset and staged training. This version is for direct use with HuggingFace Transformers from v4.40 on. OLMo is a series of Open Language Models designed to enable the science of language models. The OLMo models are trained on the Dolma dataset. We release all code, checkpoints, logs, and details involved in training these models. Model Details ------------- The core models released in this batch are the following: *Note: OLMo 1.7-7B also includes QKV clipping.* [Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps. The naming convention is 'step1000-tokens4B'. To load a specific model revision with HuggingFace, simply add the argument 'revision': All revisions/branches are listed in the file 'URL'. Or, you can access all the revisions for the models via the following code snippet: A few revisions were lost due to an error, but the vast majority are present. ### Model Description * Developed by: Allen Institute for AI (AI2) * Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW * Model type: a Transformer style autoregressive language model. * Language(s) (NLP): English * License: The code and model are released under Apache 2.0. * Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org' * Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version. ### Model Sources * Project Page: URL * Repositories: + Core repo (training, inference, fine-tuning etc.): URL + Evaluation code: URL + Further fine-tuning code: URL * Paper: Link * Technical blog post: URL * W&B Logs: pretraining, annealing Uses ---- ### Inference Install Transformers from source, or update to the next version when this PR is integrated. Now, proceed as usual with HuggingFace: Alternatively, with the pipeline abstraction: Or, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\_pretrained("allenai/OLMo-1.7-7B-hf", torch\_dtype=torch.float16, load\_in\_8bit=True)' (requires 'bitsandbytes'). The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\_ids.to('cuda')' to avoid potential issues. Note, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer. ### Fine-tuning Model fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available. 1. Fine-tune with the OLMo repository: For more documentation, see the GitHub readme. 2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here. Evaluation ---------- Core model results for the new and original 7B model are found below. And for the 1B model: \*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging. 
Model Details
-------------

### Data

For training data details, please see the Dolma documentation.
This model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.
During the annealing phase we use a higher-quality subset of Dolma with a learning rate that decays linearly to 0.

### Staged training / annealing

In contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum:

* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high.
* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.

Both stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.

### Architecture

OLMo 7B architecture with peer models for comparison.

### Hyperparameters

AdamW optimizer parameters are shown below.

Optimizer settings comparison with peer models.

Environmental Impact
--------------------

OLMo 7B variants were trained on either MI250X GPUs at the LUMI supercomputer or A100-40GB GPUs provided by MosaicML.
A summary of the environmental impact. Further details are available in the paper.

Bias, Risks, and Limitations
----------------------------

Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.

Beyond this, many statements from OLMo, as from any LLM, will often be inaccurate, so they should be fact-checked.

BibTeX:

APA:

Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.

Model Card Contact
------------------

For errors in this model card, contact Nathan, '{nathanl} at allenai dot org'.
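To make the two-stage schedule described above easier to follow, here is a hedged sketch of the learning-rate curve; the token budgets are mapped onto abstract step counts, so the helper below illustrates only the shape of the schedule, not the exact training code:

```python
import math

def cosine_lr(step, warmup=2500, peak=3e-4, floor=3e-5, horizon=100_000):
    """Warmup, then cosine decay from `peak` toward `floor` over `horizon` steps
    (the horizon corresponds to the 3T-token cosine budget)."""
    if step < warmup:
        return peak * step / warmup  # linear warmup to the peak learning rate
    t = min(1.0, (step - warmup) / (horizon - warmup))
    return floor + 0.5 * (peak - floor) * (1 + math.cos(math.pi * t))

def olmo_17_lr(step, stage1_steps, anneal_steps, horizon):
    """Stage 1: the cosine schedule, cut early at `stage1_steps` (the 2T-token
    point, where the LR is still high). Stage 2: linear decay from the
    cut-point LR to 0 over `anneal_steps` (the 50B-token anneal)."""
    if step < stage1_steps:
        return cosine_lr(step, horizon=horizon)
    lr_at_cut = cosine_lr(stage1_steps, horizon=horizon)
    return max(0.0, lr_at_cut * (1 - (step - stage1_steps) / anneal_steps))
```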
[ "### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.", "### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: pretraining, annealing\n\n\nUses\n----", "### Inference\n\n\nInstall Transformers from source, or update to the next version when this PR is integrated.\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1.7-7B-hf\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.", "### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the new and original 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------", "### Data\n\n\nFor training data details, please see the Dolma documentation.\nThis model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.\nDuring the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.", "### Staged training / annealing\n\n\nIn contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum:\n\n\n* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high.\n* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. 
See exact token counts and relative proportions of this second stage mix below.\nBoth stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.", "### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.", "### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, '{nathanl} at allenai dot org'." ]
[ "TAGS\n#transformers #safetensors #olmo #text-generation #en #dataset-allenai/dolma #arxiv-2402.00838 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n", "### Model Description\n\n\n* Developed by: Allen Institute for AI (AI2)\n* Supported by: Databricks, Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, AMD, CSC (Lumi Supercomputer), UW\n* Model type: a Transformer style autoregressive language model.\n* Language(s) (NLP): English\n* License: The code and model are released under Apache 2.0.\n* Contact: Technical inquiries: 'olmo at allenai dot org'. Press: 'press at allenai dot org'\n* Date cutoff: Oct. 2023, with most data from Feb./March 2023 based on Dolma dataset version.", "### Model Sources\n\n\n* Project Page: URL\n* Repositories:\n\t+ Core repo (training, inference, fine-tuning etc.): URL\n\t+ Evaluation code: URL\n\t+ Further fine-tuning code: URL\n* Paper: Link\n* Technical blog post: URL\n* W&B Logs: pretraining, annealing\n\n\nUses\n----", "### Inference\n\n\nInstall Transformers from source, or update to the next version when this PR is integrated.\n\n\nNow, proceed as usual with HuggingFace:\n\n\nAlternatively, with the pipeline abstraction:\n\n\nOr, you can make this slightly faster by quantizing the model, e.g. 'AutoModelForCausalLM.from\\_pretrained(\"allenai/OLMo-1.7-7B-hf\", torch\\_dtype=torch.float16, load\\_in\\_8bit=True)' (requires 'bitsandbytes').\nThe quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as 'inputs.input\\_ids.to('cuda')' to avoid potential issues.\n\n\nNote, you may see the following error if 'ai2-olmo' is not installed correctly, which is caused by internal Python check naming. We'll update the code soon to make this error clearer.", "### Fine-tuning\n\n\nModel fine-tuning can be done from the final checkpoint (the 'main' revision of this model) or many intermediate checkpoints. Two recipes for tuning are available.\n\n\n1. Fine-tune with the OLMo repository:\n\n\nFor more documentation, see the GitHub readme.\n\n\n2. Further fine-tuning support is being developing in AI2's Open Instruct repository. Details are here.\n\n\nEvaluation\n----------\n\n\nCore model results for the new and original 7B model are found below.\n\n\n\nAnd for the 1B model:\n\n\n\n\\*Unlike OLMo, Pythia, and TinyLlama, StabilityAI has not disclosed yet the data StableLM was trained on, making comparisons with other efforts challenging.\n\n\nModel Details\n-------------", "### Data\n\n\nFor training data details, please see the Dolma documentation.\nThis model uses the new 1.7 version with more data sources, better deduplication, and quality filtering.\nDuring the annealing phase we use a higher quality subset of Dolma with a linearly decaying learning rate to 0.", "### Staged training / annealing\n\n\nIn contrast to OLMo 1.0, we trained OLMo 1.7 with a two-stage curriculum:\n\n\n* In the first stage, we trained the model from scratch on the Dolma 1.7 dataset. We set a cosine learning rate schedule with a warmup of 2500 steps, a peak learning rate of 3e-4, and a cosine decay to 3e-5 after 3T tokens. We cut off this stage after 2T tokens, when the learning rate is still high.\n* At this point we switch to the second stage, in which we train on a higher-quality subset of Dolma 1.7 (see below) for another 50B tokens, while linearly decaying the learning rate to 0. 
Our high-quality subset includes (1) using all available Wikipedia, OpenWebMath and Flan data, (2) removing Dolma CC, CC News, and Megawika, and (3) rebalancing remaining sources to achieve approximately equal proportions of each. See exact token counts and relative proportions of this second stage mix below.\nBoth stages contribute equally to the final performance of the OLMo model. After the first stage, OLMo 1.7 already outperforms OLMo 1.0. The second stage consistently adds 2 to 3 points of performance on top.", "### Architecture\n\n\nOLMo 7B architecture with peer models for comparison.", "### Hyperparameters\n\n\nAdamW optimizer parameters are shown below.\n\n\n\nOptimizer settings comparison with peer models.\n\n\n\nEnvironmental Impact\n--------------------\n\n\nOLMo 7B variants were either trained on MI250X GPUs at the LUMI supercomputer, or A100-40GB GPUs provided by MosaicML.\nA summary of the environmental impact. Further details are available in the paper.\n\n\n\nBias, Risks, and Limitations\n----------------------------\n\n\nLike any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.\nSuch content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.\n\n\nOtherwise, many facts from OLMo or any LLM will often not be true, so they should be checked.\n\n\nBibTeX:\n\n\nAPA:\n\n\nGroeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Dodge, J., Lo, K., Soldaini, L., Smith, N., & Hajishirzi, H. (2024). OLMo: Accelerating the Science of Language Models. Preprint.\n\n\nModel Card Contact\n------------------\n\n\nFor errors in this model card, contact Nathan, '{nathanl} at allenai dot org'." ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon_helpfulness_classification_on_base_from_TAPT_helpfulness_pretraining This model is a fine-tuned version of [BigTMiami/tapt_helpfulness_base_pretraining_model](https://huggingface.co/BigTMiami/tapt_helpfulness_base_pretraining_model) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4874 - Accuracy: 0.8724 - F1 Macro: 0.6843 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.06 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:| | 0.3156 | 1.0 | 7204 | 0.3322 | 0.8666 | 0.5814 | | 0.2841 | 2.0 | 14408 | 0.3471 | 0.8744 | 0.6461 | | 0.274 | 3.0 | 21612 | 0.3581 | 0.8704 | 0.6287 | | 0.2602 | 4.0 | 28816 | 0.3619 | 0.87 | 0.6849 | | 0.2126 | 5.0 | 36020 | 0.5168 | 0.8678 | 0.6868 | | 0.1674 | 6.0 | 43224 | 0.5960 | 0.8672 | 0.6713 | | 0.1362 | 7.0 | 50428 | 0.6970 | 0.8684 | 0.6758 | | 0.1184 | 8.0 | 57632 | 0.7500 | 0.8674 | 0.6715 | ### Framework versions - Transformers 4.38.2 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
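For readers who want to reproduce a comparable run, a minimal sketch of the corresponding 'Trainer' setup follows; the real training data is not disclosed by this card, so the two-example dataset below is a stand-in, and only the hyperparameters are taken from the list above:

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "BigTMiami/tapt_helpfulness_base_pretraining_model"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Placeholder data; the actual helpfulness dataset is "unknown" per the card.
train = Dataset.from_dict({"text": ["great review", "useless review"], "label": [1, 0]})
train = train.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="amazon_helpfulness_classifier",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6,
    lr_scheduler_type="linear",
    warmup_ratio=0.06,
    num_train_epochs=10,
)

trainer = Trainer(model=model, args=args, train_dataset=train)
trainer.train()
```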
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "BigTMiami/tapt_helpfulness_base_pretraining_model", "model-index": [{"name": "amazon_helpfulness_classification_on_base_from_TAPT_helpfulness_pretraining", "results": []}]}
BigTMiami/amazon_helpfulness_classification_on_base_from_TAPT_helpfulness_pretraining
null
[ "transformers", "tensorboard", "safetensors", "roberta", "text-classification", "generated_from_trainer", "base_model:BigTMiami/tapt_helpfulness_base_pretraining_model", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:48:12+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/tapt_helpfulness_base_pretraining_model #license-mit #autotrain_compatible #endpoints_compatible #region-us
amazon\_helpfulness\_classification\_on\_base\_from\_TAPT\_helpfulness\_pretraining =================================================================================== This model is a fine-tuned version of BigTMiami/tapt\_helpfulness\_base\_pretraining\_model on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.4874 * Accuracy: 0.8724 * F1 Macro: 0.6843 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2e-05 * train\_batch\_size: 16 * eval\_batch\_size: 16 * seed: 42 * optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_ratio: 0.06 * num\_epochs: 10 ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #roberta #text-classification #generated_from_trainer #base_model-BigTMiami/tapt_helpfulness_base_pretraining_model #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.06\n* num\\_epochs: 10", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
peft
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.10.0
{"library_name": "peft", "base_model": "NousResearch/Llama-2-7b-chat-hf"}
Jingy2000/AITherapist-7B-v0.1
null
[ "peft", "pytorch", "llama", "arxiv:1910.09700", "base_model:NousResearch/Llama-2-7b-chat-hf", "region:us" ]
null
2024-04-17T16:48:31+00:00
[ "1910.09700" ]
[]
TAGS #peft #pytorch #llama #arxiv-1910.09700 #base_model-NousResearch/Llama-2-7b-chat-hf #region-us
# Model Card for Model ID ## Model Details ### Model Description - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact ### Framework versions - PEFT 0.10.0
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
[ "TAGS\n#peft #pytorch #llama #arxiv-1910.09700 #base_model-NousResearch/Llama-2-7b-chat-hf #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact", "### Framework versions\n\n- PEFT 0.10.0" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # model_hh_shp3_200 This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.2700 - Rewards/chosen: -2.0808 - Rewards/rejected: -2.6185 - Rewards/accuracies: 0.5300 - Rewards/margins: 0.5377 - Logps/rejected: -216.1040 - Logps/chosen: -234.7784 - Logits/rejected: -0.6432 - Logits/chosen: -0.6992 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 4 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 100 - training_steps: 1000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:| | 0.0 | 8.0 | 100 | 2.2133 | -1.8646 | -2.4313 | 0.5300 | 0.5667 | -215.8960 | -234.5382 | -0.6413 | -0.6979 | | 0.0 | 16.0 | 200 | 2.2571 | -1.9454 | -2.5096 | 0.5300 | 0.5642 | -215.9830 | -234.6279 | -0.6423 | -0.6991 | | 0.0 | 24.0 | 300 | 2.2275 | -1.9722 | -2.5264 | 0.5200 | 0.5542 | -216.0016 | -234.6577 | -0.6429 | -0.6988 | | 0.0 | 32.0 | 400 | 2.2729 | -2.0276 | -2.5437 | 0.5200 | 0.5161 | -216.0209 | -234.7193 | -0.6425 | -0.6991 | | 0.0 | 40.0 | 500 | 2.2476 | -2.0622 | -2.6344 | 0.5300 | 0.5723 | -216.1217 | -234.7577 | -0.6440 | -0.7005 | | 0.0 | 48.0 | 600 | 2.2449 | -2.0779 | -2.6423 | 0.5300 | 0.5645 | -216.1305 | -234.7751 | -0.6434 | -0.6996 | | 0.0 | 56.0 | 700 | 2.2415 | -2.0486 | -2.6063 | 0.5300 | 0.5577 | -216.0904 | -234.7426 | -0.6439 | -0.7000 | | 0.0 | 64.0 | 800 | 2.2311 | -2.0778 | -2.6332 | 0.5300 | 0.5554 | -216.1204 | -234.7751 | -0.6440 | -0.7000 | | 0.0 | 72.0 | 900 | 2.2534 | -2.0857 | -2.6363 | 0.5300 | 0.5507 | -216.1238 | -234.7838 | -0.6437 | -0.6996 | | 0.0 | 80.0 | 1000 | 2.2700 | -2.0808 | -2.6185 | 0.5300 | 0.5377 | -216.1040 | -234.7784 | -0.6432 | -0.6992 | ### Framework versions - PEFT 0.10.0 - Transformers 4.39.1 - Pytorch 2.2.1+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
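As a hedged illustration of how such a DPO run is typically assembled with 'trl' and 'peft' (the actual training script is not included in this card), here is a sketch; the preference dataset, the LoRA settings, and 'beta' are placeholders, the Llama-2 weights are gated, and the 'DPOTrainer' signature shown is the trl 0.x one:

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Placeholder preference data with the prompt/chosen/rejected columns DPO expects.
prefs = Dataset.from_dict({
    "prompt": ["How can I stay focused while studying?"],
    "chosen": ["Try short, timed sessions with regular breaks."],
    "rejected": ["Just force yourself to concentrate."],
})

args = TrainingArguments(
    output_dir="model_hh_shp3_200",
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model, ref_model=None,   # with a peft_config, trl derives the reference model itself
    args=args, beta=0.1,     # beta is illustrative; the card does not report it
    train_dataset=prefs, tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16),  # illustrative LoRA settings
)
trainer.train()
```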
{"license": "llama2", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "model_hh_shp3_200", "results": []}]}
guoyu-zhang/model_hh_shp3_200
null
[ "peft", "safetensors", "trl", "dpo", "generated_from_trainer", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "region:us" ]
null
2024-04-17T16:48:49+00:00
[]
[]
TAGS #peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us
model\_hh\_shp3\_200 ==================== This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 2.2700 * Rewards/chosen: -2.0808 * Rewards/rejected: -2.6185 * Rewards/accuracies: 0.5300 * Rewards/margins: 0.5377 * Logps/rejected: -216.1040 * Logps/chosen: -234.7784 * Logits/rejected: -0.6432 * Logits/chosen: -0.6992 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0005 * train\_batch\_size: 4 * eval\_batch\_size: 1 * seed: 42 * gradient\_accumulation\_steps: 4 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine * lr\_scheduler\_warmup\_steps: 100 * training\_steps: 1000 ### Training results ### Framework versions * PEFT 0.10.0 * Transformers 4.39.1 * Pytorch 2.2.1+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0005\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_steps: 100\n* training\\_steps: 1000", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.0\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# xlsr_hindi

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_16_1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9242
- Wer: 0.4032

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 30
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 12.3878       | 3.61  | 200  | 3.7452          | 1.0    |
| 3.3396        | 7.23  | 400  | 2.7621          | 1.0    |
| 1.1465        | 10.84 | 600  | 0.9738          | 0.5791 |
| 0.4158        | 14.46 | 800  | 0.8970          | 0.4873 |
| 0.2417        | 18.07 | 1000 | 0.8884          | 0.4374 |
| 0.1703        | 21.69 | 1200 | 0.8942          | 0.4164 |
| 0.1293        | 25.3  | 1400 | 0.9219          | 0.4093 |
| 0.1027        | 28.92 | 1600 | 0.9242          | 0.4032 |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.15.2
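A hedged sketch of a comparable CTC fine-tuning setup follows; the vocabulary file and the Common Voice preprocessing are placeholders (the card does not publish them), and only the hyperparameters in the list above come from this card:

```python
from transformers import (TrainingArguments, Wav2Vec2CTCTokenizer,
                          Wav2Vec2FeatureExtractor, Wav2Vec2ForCTC, Wav2Vec2Processor)

# A CTC tokenizer needs a language-specific vocabulary; "vocab.json" is a
# placeholder built from the Hindi transcripts during (undocumented) preprocessing.
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]",
                                 pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                             padding_value=0.0, do_normalize=True,
                                             return_attention_mask=True)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-xls-r-300m",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
model.freeze_feature_encoder()  # common for XLS-R fine-tuning; not stated in this card

args = TrainingArguments(
    output_dir="xlsr_hindi",
    learning_rate=3e-4,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=3,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,  # "mixed_precision_training: Native AMP"
    seed=42,
)
```

These 'args' would then be passed to a 'Trainer' together with a CTC padding data collator and the processed Common Voice splits.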
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice_16_1"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "xlsr_hindi", "results": []}]}
suggisingh/xlsr_hindi
null
[ "transformers", "tensorboard", "safetensors", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice_16_1", "base_model:facebook/wav2vec2-xls-r-300m", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:49:48+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_1 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us
xlsr\_hindi =========== This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice\_16\_1 dataset. It achieves the following results on the evaluation set: * Loss: 0.9242 * Wer : 0.4032 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 10 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 3 * total\_train\_batch\_size: 30 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * lr\_scheduler\_warmup\_steps: 500 * num\_epochs: 30 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.38.2 * Pytorch 2.2.1+cu121 * Datasets 2.19.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice_16_1 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #endpoints_compatible #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 10\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 3\n* total\\_train\\_batch\\_size: 30\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.19.0\n* Tokenizers 0.15.2" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-e5-large-guardrail-protected-classes-classifier-training This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
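Since the card does not yet document usage, here is a minimal, hedged inference sketch; the example sentence is invented, and the label names depend entirely on the config uploaded with the model:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="tosh97/multilingual-e5-large-guardrail-protected-classes-classifier-training",
)
print(clf("People of that religion should not be allowed to vote."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}]; the mapping of labels to
# protected classes is not documented in this card.
```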
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "intfloat/multilingual-e5-large", "model-index": [{"name": "multilingual-e5-large-guardrail-protected-classes-classifier-training", "results": []}]}
tosh97/multilingual-e5-large-guardrail-protected-classes-classifier-training
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:intfloat/multilingual-e5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:53:01+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
# multilingual-e5-large-guardrail-protected-classes-classifier-training This model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# multilingual-e5-large-guardrail-protected-classes-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# multilingual-e5-large-guardrail-protected-classes-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
null
peft
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gemma-7b-gem This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.1198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.016 | 2 | 1.1601 | | No log | 0.032 | 4 | 1.1470 | | 1.1276 | 0.048 | 6 | 1.1375 | | 1.1276 | 0.064 | 8 | 1.1304 | | 1.1805 | 0.08 | 10 | 1.1252 | | 1.1805 | 0.096 | 12 | 1.1217 | | 1.1805 | 0.112 | 14 | 1.1198 | ### Framework versions - PEFT 0.10.1.dev0 - Transformers 4.40.0.dev0 - Pytorch 2.2.2+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
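A hedged sketch of how a PEFT SFT run with these hyperparameters is typically wired up with 'trl' follows; the LoRA settings and training text are placeholders (the card discloses neither), the gemma weights are license-gated, and the 'SFTTrainer' signature shown is the trl 0.x one:

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

train = Dataset.from_dict({"text": ["### Question: ...\n### Answer: ..."]})  # placeholder corpus

args = TrainingArguments(
    output_dir="gemma-7b-gem",
    learning_rate=2.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="linear",
    max_steps=15,
    seed=42,
)

trainer = SFTTrainer(
    model=model, args=args, train_dataset=train,
    dataset_text_field="text", max_seq_length=512,  # illustrative sequence length
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16),  # illustrative LoRA settings
)
trainer.train()
```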
{"license": "gemma", "library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "google/gemma-2b", "model-index": [{"name": "gemma-7b-gem", "results": []}]}
himanshue2e/gemma-7b-gem
null
[ "peft", "safetensors", "trl", "sft", "generated_from_trainer", "base_model:google/gemma-2b", "license:gemma", "region:us" ]
null
2024-04-17T16:55:39+00:00
[]
[]
TAGS #peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us
gemma-7b-gem ============ This model is a fine-tuned version of google/gemma-2b on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 1.1198 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 2.5e-05 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 2 * total\_train\_batch\_size: 16 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: linear * training\_steps: 15 ### Training results ### Framework versions * PEFT 0.10.1.dev0 * Transformers 4.40.0.dev0 * Pytorch 2.2.2+cu121 * Datasets 2.18.0 * Tokenizers 0.15.2
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 15", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
[ "TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-google/gemma-2b #license-gemma #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2.5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* training\\_steps: 15", "### Training results", "### Framework versions\n\n\n* PEFT 0.10.1.dev0\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.2+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
himanshue2e/gemma-7b-gem-finetune
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T16:57:13+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
heyllm234/sc50
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:01:01+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
ryanu/EEVE-summarize-10.8b-v0.1
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T17:01:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
reinforcement-learning
null
# **Reinforce** Agent playing **Pixelcopter-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
{"tags": ["Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "Reinforce-Pixelcopter-PLE-v0", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Pixelcopter-PLE-v0", "type": "Pixelcopter-PLE-v0"}, "metrics": [{"type": "mean_reward", "value": "59.60 +/- 38.68", "name": "mean_reward", "verified": false}]}]}]}
MLIsaac/Reinforce-Pixelcopter-PLE-v0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-17T17:02:25+00:00
[]
[]
TAGS #Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing Pixelcopter-PLE-v0 This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0.\n To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#Pixelcopter-PLE-v0 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing Pixelcopter-PLE-v0\n This is a trained model of a Reinforce agent playing Pixelcopter-PLE-v0.\n To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
OwOOwO/dumbo-krillin46
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T17:02:51+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # multilingual-e5-large-guardrail-illegal-activities-classifier-training This model is a fine-tuned version of [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "intfloat/multilingual-e5-large", "model-index": [{"name": "multilingual-e5-large-guardrail-illegal-activities-classifier-training", "results": []}]}
tosh97/multilingual-e5-large-guardrail-illegal-activities-classifier-training
null
[ "transformers", "safetensors", "xlm-roberta", "text-classification", "generated_from_trainer", "base_model:intfloat/multilingual-e5-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:03:02+00:00
[]
[]
TAGS #transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us
# multilingual-e5-large-guardrail-illegal-activities-classifier-training This model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Framework versions - Transformers 4.39.3 - Pytorch 2.1.2+cu121 - Datasets 2.17.0 - Tokenizers 0.15.2
[ "# multilingual-e5-large-guardrail-illegal-activities-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #xlm-roberta #text-classification #generated_from_trainer #base_model-intfloat/multilingual-e5-large #license-mit #autotrain_compatible #endpoints_compatible #region-us \n", "# multilingual-e5-large-guardrail-illegal-activities-classifier-training\n\nThis model is a fine-tuned version of intfloat/multilingual-e5-large on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-06\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 10", "### Framework versions\n\n- Transformers 4.39.3\n- Pytorch 2.1.2+cu121\n- Datasets 2.17.0\n- Tokenizers 0.15.2" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
0x0mom/sl21
null
[ "transformers", "safetensors", "stablelm", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:03:14+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
karan99300/ConvNext-finetuned-CIFAR100
null
[ "transformers", "safetensors", "convnext", "image-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2024-04-17T17:03:55+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #convnext #image-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #convnext #image-classification #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
cilantro9246/8dj4sqt
null
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T17:04:30+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
null
null
# Multi_verse_modelYamshadowexperiment28-7B Multi_verse_modelYamshadowexperiment28-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration. * [automerger/YamshadowExperiment28-7B](https://huggingface.co/automerger/YamshadowExperiment28-7B) ## 🧩 Configuration ```yaml models: - model: MTSAIR/multi_verse_model # No parameters necessary for base model - model: automerger/YamshadowExperiment28-7B parameters: density: 0.53 weight: 0.6 merge_method: dare_ties base_model: MTSAIR/multi_verse_model parameters: int8_mask: true dtype: bfloat16 random_seed: 0 ``` ## 💻 Usage ```python !pip install -qU transformers accelerate from transformers import AutoTokenizer import transformers import torch model = "automerger/Multi_verse_modelYamshadowexperiment28-7B" messages = [{"role": "user", "content": "What is a large language model?"}] tokenizer = AutoTokenizer.from_pretrained(model) prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) print(outputs[0]["generated_text"]) ```
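To reproduce the merge itself (rather than just running the merged weights), mergekit's CLI can consume the configuration above. A sketch — the config and output paths are placeholders:

```sh
pip install mergekit
# Save the "Configuration" YAML above as config.yaml, then:
mergekit-yaml config.yaml ./merged-model
```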
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"], "base_model": ["automerger/YamshadowExperiment28-7B"]}
automerger/Multi_verse_modelYamshadowexperiment28-7B
null
[ "merge", "mergekit", "lazymergekit", "automerger", "base_model:automerger/YamshadowExperiment28-7B", "license:apache-2.0", "region:us" ]
null
2024-04-17T17:04:36+00:00
[]
[]
TAGS #merge #mergekit #lazymergekit #automerger #base_model-automerger/YamshadowExperiment28-7B #license-apache-2.0 #region-us
# Multi_verse_modelYamshadowexperiment28-7B Multi_verse_modelYamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration. * automerger/YamshadowExperiment28-7B ## Configuration ## Usage
[ "# Multi_verse_modelYamshadowexperiment28-7B\n\nMulti_verse_modelYamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration.\n* automerger/YamshadowExperiment28-7B", "## Configuration", "## Usage" ]
[ "TAGS\n#merge #mergekit #lazymergekit #automerger #base_model-automerger/YamshadowExperiment28-7B #license-apache-2.0 #region-us \n", "# Multi_verse_modelYamshadowexperiment28-7B\n\nMulti_verse_modelYamshadowexperiment28-7B is an automated merge created by Maxime Labonne using the following configuration.\n* automerger/YamshadowExperiment28-7B", "## Configuration", "## Usage" ]
reinforcement-learning
null
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
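For readers who want to run the checkpoint, below is a minimal, unofficial evaluation sketch. The `Policy` architecture mirrors the course's reference REINFORCE implementation and the checkpoint filename is hypothetical — neither is guaranteed to match this repository (gymnasium is used here in place of the course's older gym API):

```python
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F

class Policy(nn.Module):
    # Small MLP policy: state -> hidden -> action probabilities.
    def __init__(self, state_size=4, action_size=2, hidden_size=16):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden_size)
        self.fc2 = nn.Linear(hidden_size, action_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

env = gym.make("CartPole-v1")
policy = Policy()
# policy.load_state_dict(torch.load("model.pt"))  # hypothetical checkpoint file

state, _ = env.reset(seed=0)
total_reward, done = 0.0, False
while not done:
    with torch.no_grad():
        probs = policy(torch.from_numpy(state).float().unsqueeze(0))
    action = probs.argmax(dim=1).item()  # greedy action for evaluation
    state, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```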
{"tags": ["CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class"], "model-index": [{"name": "CartPole-v1", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "500.00 +/- 0.00", "name": "mean_reward", "verified": false}]}]}]}
amine-01/CartPole-v1
null
[ "CartPole-v1", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
null
2024-04-17T17:04:44+00:00
[]
[]
TAGS #CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us
# Reinforce Agent playing CartPole-v1 This is a trained model of a Reinforce agent playing CartPole-v1 . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL
[ "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
[ "TAGS\n#CartPole-v1 #reinforce #reinforcement-learning #custom-implementation #deep-rl-class #model-index #region-us \n", "# Reinforce Agent playing CartPole-v1\n This is a trained model of a Reinforce agent playing CartPole-v1 .\n To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: URL" ]
text-to-image
diffusers
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->


# SDXL LoRA DreamBooth - khirodsahoo93/MDP_poster_with_Nitin_Seth

<Gallery />

## Model description

These are khirodsahoo93/MDP_poster_with_Nitin_Seth LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use a photo of MDP poster to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/khirodsahoo93/MDP_poster_with_Nitin_Seth/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

A minimal sketch, assuming the standard diffusers flow for an SDXL DreamBooth LoRA (not author-provided; the output filename is a placeholder):

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the base model this LoRA was trained against, then attach the LoRA.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("khirodsahoo93/MDP_poster_with_Nitin_Seth")

# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("a photo of MDP poster").images[0]
image.save("mdp_poster.png")  # placeholder output path
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
{"license": "openrail++", "library_name": "diffusers", "tags": ["text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "text-to-image", "diffusers-training", "diffusers", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "a photo of MDP poster", "widget": []}
khirodsahoo93/MDP_poster_with_Nitin_Seth
null
[ "diffusers", "text-to-image", "diffusers-training", "dora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
null
2024-04-17T17:07:18+00:00
[]
[]
TAGS #diffusers #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us
# SDXL LoRA DreamBooth - khirodsahoo93/MDP_poster_with_Nitin_Seth <Gallery /> ## Model description These are khirodsahoo93/MDP_poster_with_Nitin_Seth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using DreamBooth. LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use a photo of MDP poster to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. Download them in the Files & versions tab. ## Intended uses & limitations #### How to use #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
[ "# SDXL LoRA DreamBooth - khirodsahoo93/MDP_poster_with_Nitin_Seth\n\n<Gallery />", "## Model description\n\nThese are khirodsahoo93/MDP_poster_with_Nitin_Seth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of MDP poster to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
[ "TAGS\n#diffusers #text-to-image #diffusers-training #dora #template-sd-lora #stable-diffusion-xl #stable-diffusion-xl-diffusers #base_model-stabilityai/stable-diffusion-xl-base-1.0 #license-openrail++ #region-us \n", "# SDXL LoRA DreamBooth - khirodsahoo93/MDP_poster_with_Nitin_Seth\n\n<Gallery />", "## Model description\n\nThese are khirodsahoo93/MDP_poster_with_Nitin_Seth LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.\n\nThe weights were trained using DreamBooth.\n\nLoRA for the text encoder was enabled: False.\n\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.", "## Trigger words\n\nYou should use a photo of MDP poster to trigger the image generation.", "## Download model\n\nWeights for this model are available in Safetensors format.\n\nDownload them in the Files & versions tab.", "## Intended uses & limitations", "#### How to use", "#### Limitations and bias\n\n[TODO: provide examples of latent issues and potential remediations]", "## Training details\n\n[TODO: describe the data used to train the model]" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
Aviral2412/fineturning_WithoutPretraining
null
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:09:49+00:00
[ "1910.09700" ]
[]
TAGS #transformers #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
llama.cpp
## Valerie v0.1 Model Card

## Overview

Valerie v0.1 is a custom language model created using `llama.cpp` (commit: 532c173) with a context length of 256 tokens, embedding length of 256, 8 heads, and 16 layers. This model was pretrained on a dataset consisting of [female V's](https://cyberpunk.fandom.com/wiki/V_(character)) dialog from [Cyberpunk 2077](https://cyberpunk.fandom.com/wiki/Cyberpunk_Wiki), extracted using the [Voice Over Subtitle Map](https://www.nexusmods.com/cyberpunk2077/mods/2045) mod.

## Model Information

### Full sampling

| Model name              | Adam iteration | Model filename                         | Vocabulary size |
| ----------------------- | -------------- | -------------------------------------- | --------------- |
| Valerie v0.1 Checkpoint | 1750           | chk-valerie-v0.1-256x32-1750.gguf      | 32,000          |
| Valerie v0.1 Model      | 1750           | ggml-valerie-v0.1-256x32-f32-1750.gguf | 32,000          |

The `ggml-valerie-v0.1-256x32-f32-1750.gguf` release represents a single epoch of all 51443 samples, completing over 1700 iterations over the entire dataset, and took approximately 3 hours to train.

### Repeat sampling

| Model name              | Adam iteration | Model filename                           | Vocabulary size |
| ----------------------- | -------------- | ---------------------------------------- | --------------- |
| Valerie v0.1 Checkpoint | 3600           | chk-valerie-v0.1-256x32-LATEST.gguf      | 32,000          |
| Valerie v0.1 Model      | 3600           | ggml-valerie-v0.1-256x32-f32-LATEST.gguf | 32,000          |

The `ggml-valerie-v0.1-256x32-f32-LATEST.gguf` release represents two epochs of all 51443 samples, completing over 3600 iterations over the entire dataset, and took approximately 6 hours to train.

### Files and versions

- ggml-vocab-mistral.gguf: Extracted Mistral 7B model vocabulary.
- ggml-valerie-v0.1-256x32-f32-1750.gguf: The pretrained model checkpoint version 1750.
- ggml-valerie-v0.1-256x32-f32-LATEST.gguf: The latest pretrained model checkpoint. Currently 3600.

## Settings

- Vocabulary size: 32,000
- Context length: 256 tokens
- Embedding length: 256
- Heads: 8
- Layers: 16
- Batch size: 32
- Seed: 1
- Checkpoint saved every 50 iterations

## Usage

To use Valerie v0.1, follow these steps:

1. Clone the `llama.cpp` repository

```sh
git clone https://github.com/ggerganov/llama.cpp
```

Reference the `llama.cpp` [README.md](https://github.com/ggerganov/llama.cpp/blob/master/README.md) for build instructions. You can build for plain CPU or with OpenBLAS. CUDA, ROCm, Vulkan, and other backends are also available.

Arch Linux Example:

```sh
# CPU build using BLAS backend on Arch Linux
sudo pacman -S openblas openblas64
make LLAMA_OPENBLAS=1
```

2. Download the latest model.

```sh
wget https://huggingface.co/teleprint-me/cyberpunk-valerie-v0.1/resolve/main/ggml-valerie-v0.1-256x32-f32-LATEST.gguf?download=true -O ggml-valerie-v0.1-256x32-f32-LATEST.gguf
```

This downloads the latest available pretrained model.

3. Perform inference with the latest model checkpoint using the provided command:

```sh
./main -m models/valerie/v0.1/ggml-valerie-v0.1-256x32-f32-LATEST.gguf --color -e -s 1 -c 4096
```

## Benchmarks

Performance metrics for evaluating v0.1 iteration 3600 on CPU, BLAS, and Vulkan backends.
### llama-bench

| model | size | params | backend | threads | test | t/s |
| ---------------- | ---------: | ------: | ------- | ------: | ------ | -----------------: |
| llama ?B all F32 | 114.53 MiB | 30.02 M | CPU | 8 | pp 512 | 12781.37 ± 2258.61 |
| llama ?B all F32 | 114.53 MiB | 30.02 M | CPU | 8 | tg 128 | 410.74 ± 6.13 |
| llama ?B all F32 | 114.53 MiB | 30.02 M | BLAS | 8 | pp 512 | 233.53 ± 1.56 |
| llama ?B all F32 | 114.53 MiB | 30.02 M | BLAS | 8 | tg 128 | 391.63 ± 14.02 |
| llama ?B all F32 | 114.53 MiB | 30.02 M | Vulkan | 99 | pp 512 | 18779.40 ± 111.01 |
| llama ?B all F32 | 114.53 MiB | 30.02 M | Vulkan | 99 | tg 128 | 96.25 ± 0.46 |

build: ab0dee5 (2686)

### batched-bench - CPU

| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|-------|--------|------|--------|----------|----------|----------|----------|----------|----------|
| 128 | 128 | 1 | 256 | 0.009 | 14365.88 | 0.345 | 370.86 | 0.354 | 723.06 |
| 128 | 128 | 2 | 512 | 0.022 | 11514.42 | 0.377 | 679.29 | 0.399 | 1282.90 |
| 128 | 128 | 4 | 1024 | 0.052 | 9811.44 | 0.438 | 1168.69 | 0.490 | 2088.60 |
| 128 | 128 | 8 | 2048 | 0.093 | 11067.40 | 0.745 | 1373.82 | 0.838 | 2444.24 |
| 128 | 256 | 1 | 384 | 0.011 | 11861.74 | 0.705 | 363.37 | 0.715 | 536.83 |
| 128 | 256 | 2 | 768 | 0.022 | 11649.60 | 0.768 | 666.97 | 0.790 | 972.62 |
| 128 | 256 | 4 | 1536 | 0.050 | 10252.10 | 0.912 | 1122.94 | 0.962 | 1596.95 |
| 256 | 128 | 1 | 384 | 0.021 | 12028.94 | 0.345 | 370.85 | 0.366 | 1047.94 |
| 256 | 128 | 2 | 768 | 0.049 | 10351.80 | 0.404 | 633.82 | 0.453 | 1694.02 |
| 256 | 128 | 4 | 1536 | 0.118 | 8688.72 | 0.484 | 1058.15 | 0.602 | 2552.70 |
| 256 | 256 | 1 | 512 | 0.022 | 11477.76 | 0.715 | 357.83 | 0.738 | 694.02 |
| 256 | 256 | 2 | 1024 | 0.050 | 10263.61 | 0.822 | 622.72 | 0.872 | 1174.20 |
| 256 | 256 | 4 | 2048 | 0.092 | 11089.45 | 0.990 | 1033.97 | 1.083 | 1891.58 |
| 512 | 128 | 1 | 640 | 0.050 | 10235.70 | 0.372 | 344.35 | 0.422 | 1517.52 |
| 512 | 128 | 2 | 1280 | 0.093 | 10987.83 | 0.445 | 575.12 | 0.538 | 2377.77 |
| 512 | 256 | 1 | 768 | 0.050 | 10208.56 | 0.783 | 326.97 | 0.833 | 921.85 |
| 512 | 256 | 2 | 1536 | 0.091 | 11216.51 | 0.925 | 553.26 | 1.017 | 1510.73 |

main: n_kv_max = 2048, n_batch = 2048, n_ubatch = 512, is_pp_shared = 0, n_gpu_layers = 999, n_threads = 8, n_threads_batch = 8

## Citations

When using Valerie v0.1 in your research, please remember to cite the following:

- aberrio. (2024). Valerie v0.1: A custom language model for female V's dialog from Cyberpunk 2077. <https://huggingface.co/teleprint-me/cyberpunk-valerie-v0.1>
- GGML team. (2023). `llama.cpp` version `532c173`. Georgi Gerganov Machine Learning Library. <https://github.com/ggerganov/llama.cpp>
- MistralAI (2023). Extracted sentencepiece model vocabulary: <https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2>
- julieisdead (2021). Voice Over Subtitle Map: Files that contain the IDs and content for Voice Over files. <https://www.nexusmods.com/cyberpunk2077/mods/2045>
- CD Projekt RED (2020). Cyberpunk 2077: GTA is a close second. <https://cyberpunk.net>

### Contributors

Austin (teleprint-me) - Created and trained Valerie v0.1 using `llama.cpp` and the referenced dataset.

### Community

Join the community of fellow language model enthusiasts and researchers by sharing your knowledge, asking questions, and collaborating on projects related to creating custom models using `llama.cpp`.

### License

Valerie v0.1 is released under the CC-BY-NC-SA-4.0 license.
You are free to use, modify, and redistribute this model for non-commercial purposes, but you must provide attribution to the original authors and release any derived works under the same license.
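As an addendum to the Usage section: the checkpoint and model filenames above follow the naming conventions of llama.cpp's `train-text-from-scratch` example, so training was plausibly invoked along the following lines. This is a reconstruction from the Settings section, not the recorded command — the flags assume that example's CLI, and the training-data filename is a placeholder:

```sh
./train-text-from-scratch \
    --vocab-model ggml-vocab-mistral.gguf \
    --ctx 256 --embd 256 --head 8 --layer 16 \
    --checkpoint-in  chk-valerie-v0.1-256x32-LATEST.gguf \
    --checkpoint-out chk-valerie-v0.1-256x32-ITERATION.gguf \
    --model-out ggml-valerie-v0.1-256x32-f32-ITERATION.gguf \
    --train-data "valerie-dialog.txt" \
    -b 32 --seed 1 --save-every 50
```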
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "library_name": "llama.cpp", "tags": ["text-generation", "artificial-intelligence", "not-for-all-audiences"], "pipeline_tag": "text-generation", "inference": false, "license_name": "creative-commons", "license_link": "https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en"}
teleprint-me/cyberpunk-valerie-v0.1
null
[ "llama.cpp", "gguf", "text-generation", "artificial-intelligence", "not-for-all-audiences", "en", "license:cc-by-nc-sa-4.0", "region:us" ]
null
2024-04-17T17:11:30+00:00
[]
[ "en" ]
TAGS #llama.cpp #gguf #text-generation #artificial-intelligence #not-for-all-audiences #en #license-cc-by-nc-sa-4.0 #region-us
Valerie v0.1 Model Card
-----------------------

Overview
--------

Valerie v0.1 is a custom language model created using 'URL' (commit: 532c173) with a context length of 256 tokens, embedding length of 256, 8 heads, and 16 layers. This model was pretrained on a dataset consisting of female V's dialog from Cyberpunk 2077, extracted using the Voice Over Subtitle Map mod.

Model Information
-----------------

### Full sampling

The 'URL' release represents a single epoch of all 51443 samples, completing over 1700 iterations over the entire dataset, and took approximately 3 hours to train.

### Repeat sampling

The 'URL' release represents two epochs of all 51443 samples, completing over 3600 iterations over the entire dataset, and took approximately 6 hours to train.

### Files and versions

* URL: Extracted Mistral 7B model vocabulary.
* URL: The pretrained model checkpoint version 1750.
* URL: The latest pretrained model checkpoint. Currently 3600.

Settings
--------

* Vocabulary size: 32,000
* Context length: 256 tokens
* Embedding length: 256
* Heads: 8
* Layers: 16
* Batch size: 32
* Seed: 1
* Checkpoint saved every 50 iterations

Usage
-----

To use Valerie v0.1, follow these steps:

1. Clone the 'URL' repository

Reference the 'URL' URL for build instructions. You can build for plain CPU or with OpenBLAS. CUDA, ROCm, Vulkan, and other backends are also available.

Arch Linux Example:

2. Download the latest model.

This downloads the latest available pretrained model.

3. Perform inference with the latest model checkpoint using the provided command:

Benchmarks
----------

Performance metrics for evaluating v0.1 iteration 3600 on CPU, BLAS, and Vulkan backends.

### llama-bench

build: ab0dee5 (2686)

### batched-bench - CPU

main: n\_kv\_max = 2048, n\_batch = 2048, n\_ubatch = 512, is\_pp\_shared = 0, n\_gpu\_layers = 999, n\_threads = 8, n\_threads\_batch = 8

Citations
---------

When using Valerie v0.1 in your research, please remember to cite the following:

* aberrio. (2024). Valerie v0.1: A custom language model for female V's dialog from Cyberpunk 2077. <URL
* GGML team. (2023). 'URL' version '532c173'. Georgi Gerganov Machine Learning Library. <URL
* MistralAI (2023). Extracted sentencepiece model vocabulary: <URL
* julieisdead (2021). Voice Over Subtitle Map: Files that contain the IDs and content for Voice Over files. <URL
* CD Projekt RED (2020). Cyberpunk 2077: GTA is a close second.

### Contributors

Austin (teleprint-me) - Created and trained Valerie v0.1 using 'URL' and the referenced dataset.

### Community

Join the community of fellow language model enthusiasts and researchers by sharing your knowledge, asking questions, and collaborating on projects related to creating custom models using 'URL'.

### License

Valerie v0.1 is released under the CC-BY-NC-SA-4.0 license. You are free to use, modify, and redistribute this model for non-commercial purposes, but you must provide attribution to the original authors and release any derived works under the same license.
[ "### Full sampling\n\n\n\nThe 'URL' release represents a single epoch of all 51443 samples, completing over 1700 iterations over the entire dataset, and took approximately 3 hours for training.", "### Repeat sampling\n\n\n\nThe 'URL' release represents two epochs of all 51443 samples, completing over 3600 iterations over the entire dataset, and took approximately 6 hours for training.", "### Files and versions\n\n\n* URL: Extracted Mistral 7B model vocabulary.\n* URL: The pretrained model checkpoint version 1750.\n* URL: The latest pretrained model checkpoint. Currently 3600.\n\n\nSettings\n--------\n\n\n* Vocabulary size: 32,000\n* Context length: 256 tokens\n* Embedding length: 256\n* Heads: 8\n* Layers: 16\n* Batch size: 32\n* Seed: 1\n* Saved checkpoint every 50 iterations\n\n\nUsage\n-----\n\n\nTo use Valerie v0.1, follow these steps:\n\n\n1. Clone the 'URL' library\n\n\nReference the 'URL' URL for more information about building. You can build using raw CPU or even OpenBLAS. CUDA, ROCm, Vulkan, and other backends are also available.\n\n\nArch Linux Example:\n\n\n2. Download the latest model.\n\n\nThis will download the latest available base model.\n\n\n3. Perform inference with the latest model checkpoint using the provided command:\n\n\nBenchmarks\n----------\n\n\nPerformance metrics for evaluating v0.1 iteration 3600 on CPU, BLAS, and Vulkan backends.", "### llama-bench\n\n\n\nbuild: ab0dee5 (2686)", "### batched-bench - CPU\n\n\n\nmain: n\\_kv\\_max = 2048, n\\_batch = 2048, n\\_ubatch = 512, is\\_pp\\_shared = 0, n\\_gpu\\_layers = 999, n\\_threads = 8, n\\_threads\\_batch = 8\n\n\ns\n\n\nWhen using Valerie v0.1 in your research, please remember to cite the following:\n\n\n* aberrio. (2024). Valerie v0.1: A custom language model for female V's dialog from Cyberpunk 2077. <URL\n* GGML team. (2023). 'URL' version '532c173'. Georgi Gerganov Machine Learning Library. <URL\n* MistralAI (2023). Extracted sentencepiece model vocabulary: <URL\n* julieisdead (2021). Voice Over Subtitle Map: Files that contain the IDs and content for Voice Over files. <URL\n* CD Projekt RED (2020). Cyberpunk 2077: GTA is a close second.", "### Contributors\n\n\nAustin (teleprint-me) - Created and trained Valerie v0.1 using 'URL' and the referenced dataset.", "### Community\n\n\nJoin the community of fellow language model enthusiasts and researchers by sharing your knowledge, asking questions, and collaborating on projects related to creating custom models using 'URL'.", "### License\n\n\nValerie v0.1 is released under the CC-BY-NC-SA-3.0 license. You are free to use, modify, and redistribute this model for non-commercial purposes, but you must provide attribution to the original authors and release any derived works under the same license." ]
[ "TAGS\n#llama.cpp #gguf #text-generation #artificial-intelligence #not-for-all-audiences #en #license-cc-by-nc-sa-4.0 #region-us \n", "### Full sampling\n\n\n\nThe 'URL' release represents a single epoch of all 51443 samples, completing over 1700 iterations over the entire dataset, and took approximately 3 hours for training.", "### Repeat sampling\n\n\n\nThe 'URL' release represents two epochs of all 51443 samples, completing over 3600 iterations over the entire dataset, and took approximately 6 hours for training.", "### Files and versions\n\n\n* URL: Extracted Mistral 7B model vocabulary.\n* URL: The pretrained model checkpoint version 1750.\n* URL: The latest pretrained model checkpoint. Currently 3600.\n\n\nSettings\n--------\n\n\n* Vocabulary size: 32,000\n* Context length: 256 tokens\n* Embedding length: 256\n* Heads: 8\n* Layers: 16\n* Batch size: 32\n* Seed: 1\n* Saved checkpoint every 50 iterations\n\n\nUsage\n-----\n\n\nTo use Valerie v0.1, follow these steps:\n\n\n1. Clone the 'URL' library\n\n\nReference the 'URL' URL for more information about building. You can build using raw CPU or even OpenBLAS. CUDA, ROCm, Vulkan, and other backends are also available.\n\n\nArch Linux Example:\n\n\n2. Download the latest model.\n\n\nThis will download the latest available base model.\n\n\n3. Perform inference with the latest model checkpoint using the provided command:\n\n\nBenchmarks\n----------\n\n\nPerformance metrics for evaluating v0.1 iteration 3600 on CPU, BLAS, and Vulkan backends.", "### llama-bench\n\n\n\nbuild: ab0dee5 (2686)", "### batched-bench - CPU\n\n\n\nmain: n\\_kv\\_max = 2048, n\\_batch = 2048, n\\_ubatch = 512, is\\_pp\\_shared = 0, n\\_gpu\\_layers = 999, n\\_threads = 8, n\\_threads\\_batch = 8\n\n\ns\n\n\nWhen using Valerie v0.1 in your research, please remember to cite the following:\n\n\n* aberrio. (2024). Valerie v0.1: A custom language model for female V's dialog from Cyberpunk 2077. <URL\n* GGML team. (2023). 'URL' version '532c173'. Georgi Gerganov Machine Learning Library. <URL\n* MistralAI (2023). Extracted sentencepiece model vocabulary: <URL\n* julieisdead (2021). Voice Over Subtitle Map: Files that contain the IDs and content for Voice Over files. <URL\n* CD Projekt RED (2020). Cyberpunk 2077: GTA is a close second.", "### Contributors\n\n\nAustin (teleprint-me) - Created and trained Valerie v0.1 using 'URL' and the referenced dataset.", "### Community\n\n\nJoin the community of fellow language model enthusiasts and researchers by sharing your knowledge, asking questions, and collaborating on projects related to creating custom models using 'URL'.", "### License\n\n\nValerie v0.1 is released under the CC-BY-NC-SA-3.0 license. You are free to use, modify, and redistribute this model for non-commercial purposes, but you must provide attribution to the original authors and release any derived works under the same license." ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0417MADP4 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1454 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.3437 | 0.09 | 10 | 2.9727 | | 6.7169 | 0.18 | 20 | 2.7571 | | 4.7246 | 0.27 | 30 | 2.3872 | | 2.7752 | 0.36 | 40 | 1.7401 | | 1.361 | 0.45 | 50 | 1.0587 | | 0.6269 | 0.54 | 60 | 0.6595 | | 0.333 | 0.63 | 70 | 0.3442 | | 0.2172 | 0.73 | 80 | 0.2248 | | 0.1846 | 0.82 | 90 | 0.2079 | | 0.1761 | 0.91 | 100 | 0.1780 | | 0.1761 | 1.0 | 110 | 0.1788 | | 0.171 | 1.09 | 120 | 0.1687 | | 0.161 | 1.18 | 130 | 0.1565 | | 0.1566 | 1.27 | 140 | 0.1558 | | 0.2021 | 1.36 | 150 | 0.1842 | | 0.1681 | 1.45 | 160 | 0.1545 | | 0.1668 | 1.54 | 170 | 0.1516 | | 0.1642 | 1.63 | 180 | 0.1501 | | 0.1685 | 1.72 | 190 | 0.1599 | | 0.1685 | 1.81 | 200 | 0.1543 | | 0.1643 | 1.9 | 210 | 0.1679 | | 0.1608 | 1.99 | 220 | 0.1575 | | 0.1593 | 2.08 | 230 | 0.1475 | | 0.1539 | 2.18 | 240 | 0.1490 | | 0.1511 | 2.27 | 250 | 0.1463 | | 0.1543 | 2.36 | 260 | 0.1468 | | 0.1534 | 2.45 | 270 | 0.1477 | | 0.1524 | 2.54 | 280 | 0.1462 | | 0.1513 | 2.63 | 290 | 0.1457 | | 0.153 | 2.72 | 300 | 0.1457 | | 0.1516 | 2.81 | 310 | 0.1454 | | 0.153 | 2.9 | 320 | 0.1454 | | 0.1535 | 2.99 | 330 | 0.1454 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
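The training script itself is not provided, but the hyperparameters above map directly onto Hugging Face `TrainingArguments`. A sketch — `output_dir` is a placeholder, and `fp16=True` is inferred from "Native AMP" mixed precision:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="V0417MADP4",                   # placeholder output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=16,            # 8 x 16 = total train batch of 128
    lr_scheduler_type="cosine_with_restarts",
    warmup_steps=60,
    num_train_epochs=3,
    seed=42,
    fp16=True,                                 # "Native AMP" mixed precision
)
```

The Adam betas and epsilon listed above are the `TrainingArguments` defaults, so they need no explicit arguments.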
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0417MADP4", "results": []}]}
Litzy619/V0417MADP4
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-17T17:13:18+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0417MADP4 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1454 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
null
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # V0417MADP2 This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1574 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine_with_restarts - lr_scheduler_warmup_steps: 60 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 8.5044 | 0.09 | 10 | 3.0469 | | 7.1174 | 0.18 | 20 | 2.9171 | | 6.0751 | 0.27 | 30 | 2.6390 | | 4.2708 | 0.36 | 40 | 2.0163 | | 2.4666 | 0.45 | 50 | 1.5118 | | 1.3427 | 0.54 | 60 | 0.8933 | | 0.5622 | 0.63 | 70 | 0.4358 | | 0.2583 | 0.73 | 80 | 0.2698 | | 0.2135 | 0.82 | 90 | 0.2154 | | 0.1981 | 0.91 | 100 | 0.1957 | | 0.1955 | 1.0 | 110 | 0.1945 | | 0.2021 | 1.09 | 120 | 0.2029 | | 0.1932 | 1.18 | 130 | 0.1893 | | 0.1726 | 1.27 | 140 | 0.1965 | | 0.1813 | 1.36 | 150 | 0.1825 | | 0.1865 | 1.45 | 160 | 0.1699 | | 0.1787 | 1.54 | 170 | 0.1609 | | 0.1634 | 1.63 | 180 | 0.1666 | | 0.1673 | 1.72 | 190 | 0.1703 | | 0.2204 | 1.81 | 200 | 0.1684 | | 0.1751 | 1.9 | 210 | 0.1619 | | 0.1656 | 1.99 | 220 | 0.1665 | | 0.1717 | 2.08 | 230 | 0.1583 | | 0.1664 | 2.18 | 240 | 0.1635 | | 0.1682 | 2.27 | 250 | 0.1628 | | 0.1729 | 2.36 | 260 | 0.1635 | | 0.1703 | 2.45 | 270 | 0.1622 | | 0.168 | 2.54 | 280 | 0.1578 | | 0.1588 | 2.63 | 290 | 0.1564 | | 0.1554 | 2.72 | 300 | 0.1571 | | 0.1566 | 2.81 | 310 | 0.1573 | | 0.1602 | 2.9 | 320 | 0.1572 | | 0.1587 | 2.99 | 330 | 0.1574 | ### Framework versions - Transformers 4.36.0.dev0 - Pytorch 2.1.2+cu121 - Datasets 2.14.6 - Tokenizers 0.14.1
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/phi-2", "model-index": [{"name": "V0417MADP2", "results": []}]}
Litzy619/V0417MADP2
null
[ "safetensors", "generated_from_trainer", "base_model:microsoft/phi-2", "license:mit", "region:us" ]
null
2024-04-17T17:13:20+00:00
[]
[]
TAGS #safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us
V0417MADP2 ========== This model is a fine-tuned version of microsoft/phi-2 on an unknown dataset. It achieves the following results on the evaluation set: * Loss: 0.1574 Model description ----------------- More information needed Intended uses & limitations --------------------------- More information needed Training and evaluation data ---------------------------- More information needed Training procedure ------------------ ### Training hyperparameters The following hyperparameters were used during training: * learning\_rate: 0.0003 * train\_batch\_size: 8 * eval\_batch\_size: 8 * seed: 42 * gradient\_accumulation\_steps: 16 * total\_train\_batch\_size: 128 * optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 * lr\_scheduler\_type: cosine\_with\_restarts * lr\_scheduler\_warmup\_steps: 60 * num\_epochs: 3 * mixed\_precision\_training: Native AMP ### Training results ### Framework versions * Transformers 4.36.0.dev0 * Pytorch 2.1.2+cu121 * Datasets 2.14.6 * Tokenizers 0.14.1
[ "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
[ "TAGS\n#safetensors #generated_from_trainer #base_model-microsoft/phi-2 #license-mit #region-us \n", "### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\\_with\\_restarts\n* lr\\_scheduler\\_warmup\\_steps: 60\n* num\\_epochs: 3\n* mixed\\_precision\\_training: Native AMP", "### Training results", "### Framework versions\n\n\n* Transformers 4.36.0.dev0\n* Pytorch 2.1.2+cu121\n* Datasets 2.14.6\n* Tokenizers 0.14.1" ]
null
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": ["unsloth"]}
Existance/Mistral-7b-Hindi-SFT
null
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:14:21+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #unsloth #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
image-classification
transformers
# rare-puppers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### cat ![cat](images/cat.jpg) #### chicken ![chicken](images/chicken.jpg) #### cow ![cow](images/cow.jpg) #### dog ![dog](images/dog.jpg) #### fish ![fish](images/fish.jpg)
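To try the classifier locally, a minimal inference sketch with the 🤗 `image-classification` pipeline is shown below; the image path is a placeholder.

```python
# Minimal inference sketch; "your_pet.jpg" is a placeholder image path.
from transformers import pipeline

classifier = pipeline("image-classification", model="emlababia/rare-puppers")
for pred in classifier("your_pet.jpg"):
    print(f"{pred['label']}: {pred['score']:.3f}")
```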
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
emlababia/rare-puppers
null
[ "transformers", "tensorboard", "safetensors", "vit", "image-classification", "pytorch", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:19:15+00:00
[]
[]
TAGS #transformers #tensorboard #safetensors #vit #image-classification #pytorch #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us
# rare-puppers Autogenerated by HuggingPics️ Create your own image classifier for anything by running the demo on Google Colab. Report any issues with the demo at the github repo. ## Example Images #### cat !cat #### chicken !chicken #### cow !cow #### dog !dog #### fish !fish
[ "# rare-puppers\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### cat\n\n!cat", "#### chicken\n\n!chicken", "#### cow\n\n!cow", "#### dog\n\n!dog", "#### fish\n\n!fish" ]
[ "TAGS\n#transformers #tensorboard #safetensors #vit #image-classification #pytorch #huggingpics #model-index #autotrain_compatible #endpoints_compatible #region-us \n", "# rare-puppers\n\n\nAutogenerated by HuggingPics️\n\nCreate your own image classifier for anything by running the demo on Google Colab.\n\nReport any issues with the demo at the github repo.", "## Example Images", "#### cat\n\n!cat", "#### chicken\n\n!chicken", "#### cow\n\n!cow", "#### dog\n\n!dog", "#### fish\n\n!fish" ]
feature-extraction
transformers
# medical-qa ## 🚀 Overview This project focuses on developing a semantic search system for medical question answering by fine-tuning pre-trained language models. The project utilizes datasets sourced from Hugging Face, containing medical question-answer pairs, which are processed to ensure compatibility with the chosen model architecture. The base model fine-tuned for this task is the BAAI/bge-small-en-v1.5 model from the SentenceTransformers library, and the details for the [fine-tuned model](https://huggingface.co/aleynahukmet/bge-medical-small-en-v1.5) can be found below. ## Motivation This project addresses the critical need for efficient and accurate medical information retrieval systems. Traditional methods often struggle with the nuanced semantics of medical queries. By leveraging advanced natural language processing techniques and large-scale medical datasets, we aim to streamline the process, empowering healthcare professionals and patients with timely, reliable, and personalized medical information. Ultimately, our goal is to contribute to the efforts to make medical information more accessible. ## Dataset This project incorporates multiple datasets sourced from the Hugging Face platform, each contributing to the training and evaluation of the semantic search system: 1. [keivalya/MedQuad-MedicalQnADataset](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset) - Description: This dataset contains medical question-answer pairs intended for training machine learning models in the medical domain. It consists of three columns: 'qtype' denoting the question type, 'Question' representing the medical queries, and 'Answer' providing corresponding answers. The dataset comprises 16,407 samples in the training split. 2. [medalpaca/medical_meadow_wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc) - Description: This dataset, consisting of 10,000 samples in the training split, contains medical information in the form of instructions, input, and output. The 'instruction' column provides contextual information, while the 'input' and 'output' columns contain medical queries and their corresponding answers, respectively. 3. [medalpaca/medical_meadow_medical_flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards) - Description: This dataset comprises 33,955 samples in the training split. It follows a similar structure to the medical_meadow_wikidoc dataset, with 'instruction', 'input', and 'output' columns. The 'instruction' column provides additional context, while the 'input' and 'output' columns contain medical questions and their corresponding answers. 4. [medalpaca/medical_meadow_wikidoc_patient_information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information) - Description: With 5,942 samples in the training split, this dataset focuses on patient information within the medical domain. It features 'instruction', 'input', and 'output' columns similar to the previous datasets, where 'input' represents medical queries and 'output' denotes corresponding answers. After preprocessing, which involves removing irrelevant columns and renaming columns for uniformity, the datasets are concatenated into a single dataset. The resulting dataset contains 53,043 samples in the training split and 13,261 samples in the test split, with 'question' and 'answer' columns representing medical queries and their corresponding answers, respectively. 
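A sketch of this preprocessing with the 🤗 `datasets` library is shown below. It is an assumed reconstruction rather than the project's exact code: the dropped columns follow the dataset descriptions above, and the split ratio and seed are inferred from the reported split sizes.

```
# Assumed reconstruction of the preprocessing described above (not the exact
# project code): normalize all four datasets to question/answer columns,
# concatenate them, and carve out a test split.
from datasets import load_dataset, concatenate_datasets

medquad = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")
medquad = medquad.remove_columns(["qtype"]).rename_columns(
    {"Question": "question", "Answer": "answer"}
)

parts = [medquad]
for name in [
    "medalpaca/medical_meadow_wikidoc",
    "medalpaca/medical_meadow_medical_flashcards",
    "medalpaca/medical_meadow_wikidoc_patient_information",
]:
    ds = load_dataset(name, split="train")
    ds = ds.remove_columns(["instruction"]).rename_columns(
        {"input": "question", "output": "answer"}
    )
    parts.append(ds)

combined = concatenate_datasets(parts)
splits = combined.train_test_split(test_size=0.2, seed=42)  # ratio/seed assumed
print(splits)  # roughly 53k train / 13k test samples
```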
## Usage The model is hosted on the Hugging Face model hub at [aleynahukmet/bge-medical-small-en-v1.5](https://huggingface.co/aleynahukmet/bge-medical-small-en-v1.5/), and it is easy to use with the Sentence Transformers library. 1. Install Sentence Transformers Library: Ensure you have the Sentence Transformers library installed. You can install it via pip if you haven't already: ``` pip install sentence-transformers ``` 2. Load the Model: Once the model is downloaded, you can load it into your Python environment using the SentenceTransformer class: ``` from sentence_transformers import SentenceTransformer model_name = "aleynahukmet/bge-medical-small-en-v1.5" model = SentenceTransformer(model_name) ``` 3. Encode Medical Texts: You can now use the loaded model to encode medical texts into fixed-dimensional vectors. For example: ``` # Example medical text medical_text = "A 45-year-old male presents with chest pain and shortness of breath." # Encode the medical text encoded_text = model.encode(medical_text) ``` 4. Utilize Encoded Vectors: The encoded vectors can be used for various downstream tasks, such as semantic search, clustering, or classification, depending on your specific application needs. ## Training: You can review the code for fine-tuning in this [notebook](https://github.com/aleynahukmet/medical-qa/blob/main/medical-qa.ipynb). ## Evaluation: I used the Translation Evaluator to evaluate the model on the test set, and it achieved ~0.887 (a roughly 10-point improvement over 0.78 for the base model). ## Requirements: ``` datasets==2.18.0 numpy==1.24.4 pandas==2.0.3 sentence_transformers==2.5.1 torch==2.0.1 ``` If you don't have the requirements installed, they can be installed with the following: ``` pip install -r requirements.txt ```
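As a concrete follow-up to step 4 of the Usage section, here is a minimal semantic-search sketch on top of the encoder. It is illustrative only: the candidate answers are made-up examples, and `util.cos_sim` is the standard sentence-transformers cosine-similarity helper.

```
# Minimal semantic-search sketch for step 4 above. The candidate answers are
# made-up examples; util.cos_sim is the sentence-transformers cosine helper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("aleynahukmet/bge-medical-small-en-v1.5")

answers = [
    "Chest pain with shortness of breath can indicate a cardiac event.",
    "Seasonal allergies are commonly treated with antihistamines.",
    "A sprained ankle should be rested, iced, compressed, and elevated.",
]
answer_embeddings = model.encode(answers)

query = "A 45-year-old male presents with chest pain and shortness of breath."
query_embedding = model.encode(query)

scores = util.cos_sim(query_embedding, answer_embeddings)[0]
best = int(scores.argmax())
print(f"Best match (score {scores[best]:.3f}): {answers[best]}")
```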
{}
aleynahukmet/bge-medical-small-en-v1.5
null
[ "transformers", "safetensors", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:20:01+00:00
[]
[]
TAGS #transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us
# medical-qa ## Overview This project focuses on developing a semantic search system for medical question answering by fine-tuning pre-trained language models. The project utilizes datasets sourced from Hugging Face, containing medical question-answer pairs, which are processed to ensure compatibility with the chosen model architecture. The base model fine-tuned for this task is the BAAI/bge-small-en-v1.5 model from the SentenceTransformers library, and the details for the fine-tuned model can be found below. ## Motivation This project addresses the critical need for efficient and accurate medical information retrieval systems. Traditional methods often struggle with the nuanced semantics of medical queries. By leveraging advanced natural language processing techniques and large-scale medical datasets, we aim to streamline the process, empowering healthcare professionals and patients with timely, reliable, and personalized medical information. Ultimately, our goal is to contribute to the efforts to make medical information more accessible. ## Dataset This project incorporates multiple datasets sourced from the Hugging Face platform, each contributing to the training and evaluation of the semantic search system: 1. keivalya/MedQuad-MedicalQnADataset - Description: This dataset contains medical question-answer pairs intended for training machine learning models in the medical domain. It consists of three columns: 'qtype' denoting the question type, 'Question' representing the medical queries, and 'Answer' providing corresponding answers. The dataset comprises 16,407 samples in the training split. 2. medalpaca/medical_meadow_wikidoc - Description: This dataset, consisting of 10,000 samples in the training split, contains medical information in the form of instructions, input, and output. The 'instruction' column provides contextual information, while the 'input' and 'output' columns contain medical queries and their corresponding answers, respectively. 3. medalpaca/medical_meadow_medical_flashcards - Description: This dataset comprises 33,955 samples in the training split. It follows a similar structure to the medical_meadow_wikidoc dataset, with 'instruction', 'input', and 'output' columns. The 'instruction' column provides additional context, while the 'input' and 'output' columns contain medical questions and their corresponding answers. 4. medalpaca/medical_meadow_wikidoc_patient_information - Description: With 5,942 samples in the training split, this dataset focuses on patient information within the medical domain. It features 'instruction', 'input', and 'output' columns similar to the previous datasets, where 'input' represents medical queries and 'output' denotes corresponding answers. After preprocessing, which involves removing irrelevant columns and renaming columns for uniformity, the datasets are concatenated into a single dataset. The resulting dataset contains 53,043 samples in the training split and 13,261 samples in the test split, with 'question' and 'answer' columns representing medical queries and their corresponding answers, respectively. ## Usage The model is hosted on the Hugging Face model hub at aleynahukmet/bge-medical-small-en-v1.5, and it is easy to use with the Sentence Transformers library. 1. Install Sentence Transformers Library: Ensure you have the Sentence Transformers library installed. You can install it via pip if you haven't already: 2. Load the Model: Once the model is downloaded, you can load it into your Python environment using the SentenceTransformer class: 3. 
Encode Medical Texts: You can now use the loaded model to encode medical texts into fixed-dimensional vectors. For example: 4. Utilize Encoded Vectors: The encoded vectors can be used for various downstream tasks, such as semantic search, clustering, or classification, depending on your specific application needs. ## Training: You can review the code for fine-tuning in this notebook. ## Evaluation: I used the Translation Evaluator to evaluate the model on the test set, and it achieved ~0.887 (a roughly 10-point improvement over 0.78 for the base model). ## Requirements: If you don't have the requirements installed, they can be installed with the following:
[ "# medical-qa", "## Overview\n\nThis project focuses on developing a semantic search system for medical question answering fine-tuning pre-trained language models. The project utilizes datasets sourced from Hugging Face, containing medical question-answer pairs, which are processed to ensure compatibility with the chosen model architecture. The base model fine-tuned for this task is the BAAI/bge-small-en-v1.5 model from the SentenceTransformers library, and the details for the fine-tuned model can be found below.", "## Motivation\n\nThis project addresses the critical need for efficient and accurate medical information retrieval systems. Traditional methods often struggle with the nuanced semantics of medical queries. By leveraging advanced natural language processing techniques and large-scale medical datasets, we aim to streamline the process, empowering healthcare professionals and patients with timely, reliable, and personalized medical information. Ultimately, our goal is to contribute to the efforts to make medical information more accessible.", "## Dataset\n\nThis project incorporates multiple datasets sourced from the Hugging Face platform, each contributing to the training and evaluation of the semantic search system:\n\n1. keivalya/MedQuad-MedicalQnADataset\n - Description: This medical question-answer pairs intended for training machine learning models in the medical domain. It consists of three columns: 'qtype' denoting the \n question type, 'Question' representing the medical queries, and 'Answer' providing corresponding answers. The dataset comprises 16,407 samples in the training split.\n2. medalpaca/medical_meadow_wikidoc\n - Description: This dataset, consisting of 10,000 samples in the training split, contains medical information in the form of instructions, input, and output. The 'instruction' column provides contextual \n information, while the 'input' and 'output' columns contain medical queries and their corresponding answers, respectively.\n3. medalpaca/medical_meadow_medical_flashcards\n - Description: This dataset comprises 33,955 samples in the training split. It follows a similar structure to the medical_meadow_wikidoc dataset, with 'instruction', 'input', and 'output' columns. The ' \n instruction' column provides additional context, while the 'input' and 'output' columns contain medical questions and their corresponding answers.\n4. medalpaca/medical_meadow_wikidoc_patient_information\n - Description: With 5,942 samples in the training split, this dataset focuses on patient information within the medical domain. It features 'instruction', 'input', and 'output' columns similar to the previous \n datasets, where 'input' represents medical queries and 'output' denotes corresponding answers.\n\nAfter preprocessing, which involves removing irrelevant columns and renaming columns for uniformity, the datasets are concatenated into a single dataset. The resulting dataset contains 53,043 samples in the training split and 13,261 samples in the test split, with 'question' and 'answer' columns representing medical queries and their corresponding answers, respectively.", "## Usage\n\nThe model is hosted on the Hugging Face model hub at aleynahukmet/bge-medical-small-en-v1.5, and it's easy to use it with the Sentence Transformers library.\n\n1. Install Sentence Transformers Library:\n Ensure you have the Sentence Transformers library installed. You can install it via pip if you haven't already:\n \n \n \n2. 
Load the Model:\n Once the model is downloaded, you can load it into your Python environment using the SentenceTransformer class:\n\n \n\n3. Encode Medical Texts:\n You can now use the loaded model to encode medical texts into fixed-dimensional vectors. For example:\n\n \n\n4. Utilize Encoded Vectors:\n The encoded vectors can be used for various downstream tasks, such as semantic search, clustering, or classification, depending on your specific application needs.", "## Training:\nYou can review the code for fine-tuning in this notebook.", "## Evaluation:\n\nI used Translation Evaluator to evaluate the model on the test set, and it achieved ~0.887 (a 10-point improvement from 0.78 for the base model).", "## Requirements:\n\n \nIf you don't have the requirements installed, they can be installed with the following:" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #endpoints_compatible #region-us \n", "# medical-qa", "## Overview\n\nThis project focuses on developing a semantic search system for medical question answering fine-tuning pre-trained language models. The project utilizes datasets sourced from Hugging Face, containing medical question-answer pairs, which are processed to ensure compatibility with the chosen model architecture. The base model fine-tuned for this task is the BAAI/bge-small-en-v1.5 model from the SentenceTransformers library, and the details for the fine-tuned model can be found below.", "## Motivation\n\nThis project addresses the critical need for efficient and accurate medical information retrieval systems. Traditional methods often struggle with the nuanced semantics of medical queries. By leveraging advanced natural language processing techniques and large-scale medical datasets, we aim to streamline the process, empowering healthcare professionals and patients with timely, reliable, and personalized medical information. Ultimately, our goal is to contribute to the efforts to make medical information more accessible.", "## Dataset\n\nThis project incorporates multiple datasets sourced from the Hugging Face platform, each contributing to the training and evaluation of the semantic search system:\n\n1. keivalya/MedQuad-MedicalQnADataset\n - Description: This medical question-answer pairs intended for training machine learning models in the medical domain. It consists of three columns: 'qtype' denoting the \n question type, 'Question' representing the medical queries, and 'Answer' providing corresponding answers. The dataset comprises 16,407 samples in the training split.\n2. medalpaca/medical_meadow_wikidoc\n - Description: This dataset, consisting of 10,000 samples in the training split, contains medical information in the form of instructions, input, and output. The 'instruction' column provides contextual \n information, while the 'input' and 'output' columns contain medical queries and their corresponding answers, respectively.\n3. medalpaca/medical_meadow_medical_flashcards\n - Description: This dataset comprises 33,955 samples in the training split. It follows a similar structure to the medical_meadow_wikidoc dataset, with 'instruction', 'input', and 'output' columns. The ' \n instruction' column provides additional context, while the 'input' and 'output' columns contain medical questions and their corresponding answers.\n4. medalpaca/medical_meadow_wikidoc_patient_information\n - Description: With 5,942 samples in the training split, this dataset focuses on patient information within the medical domain. It features 'instruction', 'input', and 'output' columns similar to the previous \n datasets, where 'input' represents medical queries and 'output' denotes corresponding answers.\n\nAfter preprocessing, which involves removing irrelevant columns and renaming columns for uniformity, the datasets are concatenated into a single dataset. The resulting dataset contains 53,043 samples in the training split and 13,261 samples in the test split, with 'question' and 'answer' columns representing medical queries and their corresponding answers, respectively.", "## Usage\n\nThe model is hosted on the Hugging Face model hub at aleynahukmet/bge-medical-small-en-v1.5, and it's easy to use it with the Sentence Transformers library.\n\n1. Install Sentence Transformers Library:\n Ensure you have the Sentence Transformers library installed. 
You can install it via pip if you haven't already:\n \n \n \n2. Load the Model:\n Once the model is downloaded, you can load it into your Python environment using the SentenceTransformer class:\n\n \n\n3. Encode Medical Texts:\n You can now use the loaded model to encode medical texts into fixed-dimensional vectors. For example:\n\n \n\n4. Utilize Encoded Vectors:\n The encoded vectors can be used for various downstream tasks, such as semantic search, clustering, or classification, depending on your specific application needs.", "## Training:\nYou can review the code for fine-tuning in this notebook.", "## Evaluation:\n\nI used Translation Evaluator to evaluate the model on the test set, and it achieved ~0.887 (a 10-point improvement from 0.78 for the base model).", "## Requirements:\n\n \nIf you don't have the requirements installed, they can be installed with the following:" ]
null
transformers
## About <!-- ### quantize_version: 1 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: --> <!-- ### vocab_type: --> static quants of https://huggingface.co/DZgas/GIGABATEMAN-7B <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q2_K.gguf) | Q2_K | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_XS.gguf) | IQ3_XS | 3.1 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ3_M.gguf) | IQ3_M | 3.4 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality | | [GGUF](https://huggingface.co/mradermacher/GIGABATEMAN-7B-GGUF/resolve/main/GIGABATEMAN-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
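If you prefer a Python entry point over calling llama.cpp directly, here is a minimal sketch using `llama-cpp-python`; it assumes the Q4_K_M file from the table above has already been downloaded locally, and the prompt is a placeholder.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes the Q4_K_M quant from the table above was downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="GIGABATEMAN-7B.Q4_K_M.gguf",
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
out = llm("Write a short scene description:", max_tokens=128, temperature=0.8)
print(out["choices"][0]["text"])
```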
{"language": ["en"], "library_name": "transformers", "tags": ["mistral", "llama", "nsfw", "roleplay", "merge"], "base_model": "DZgas/GIGABATEMAN-7B", "quantized_by": "mradermacher"}
mradermacher/GIGABATEMAN-7B-GGUF
null
[ "transformers", "gguf", "mistral", "llama", "nsfw", "roleplay", "merge", "en", "base_model:DZgas/GIGABATEMAN-7B", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:25:49+00:00
[]
[ "en" ]
TAGS #transformers #gguf #mistral #llama #nsfw #roleplay #merge #en #base_model-DZgas/GIGABATEMAN-7B #endpoints_compatible #region-us
About ----- static quants of URL weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. Usage ----- If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including on how to concatenate multi-part files. Provided Quants --------------- (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): !URL And here are Artefact2's thoughts on the matter: URL FAQ / Model Request ------------------- See URL for some answers to questions you might have and/or if you want some other model quantized. Thanks ------ I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
[]
[ "TAGS\n#transformers #gguf #mistral #llama #nsfw #roleplay #merge #en #base_model-DZgas/GIGABATEMAN-7B #endpoints_compatible #region-us \n" ]
feature-extraction
transformers
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
{"library_name": "transformers", "tags": []}
EinsZwo/mlm-not-mixed-justbert-fullset
null
[ "transformers", "safetensors", "bert", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2024-04-17T17:27:25+00:00
[ "1910.09700" ]
[]
TAGS #transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us
# Model Card for Model ID ## Model Details ### Model Description This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated. - Developed by: - Funded by [optional]: - Shared by [optional]: - Model type: - Language(s) (NLP): - License: - Finetuned from model [optional]: ### Model Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Downstream Use [optional] ### Out-of-Scope Use ## Bias, Risks, and Limitations ### Recommendations Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. ## Training Details ### Training Data ### Training Procedure #### Preprocessing [optional] #### Training Hyperparameters - Training regime: #### Speeds, Sizes, Times [optional] ## Evaluation ### Testing Data, Factors & Metrics #### Testing Data #### Factors #### Metrics ### Results #### Summary ## Model Examination [optional] ## Environmental Impact Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019). - Hardware Type: - Hours used: - Cloud Provider: - Compute Region: - Carbon Emitted: ## Technical Specifications [optional] ### Model Architecture and Objective ### Compute Infrastructure #### Hardware #### Software [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Model Card Authors [optional] ## Model Card Contact
[ "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
[ "TAGS\n#transformers #safetensors #bert #feature-extraction #arxiv-1910.09700 #endpoints_compatible #region-us \n", "# Model Card for Model ID", "## Model Details", "### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:", "### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Downstream Use [optional]", "### Out-of-Scope Use", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.", "## How to Get Started with the Model\n\nUse the code below to get started with the model.", "## Training Details", "### Training Data", "### Training Procedure", "#### Preprocessing [optional]", "#### Training Hyperparameters\n\n- Training regime:", "#### Speeds, Sizes, Times [optional]", "## Evaluation", "### Testing Data, Factors & Metrics", "#### Testing Data", "#### Factors", "#### Metrics", "### Results", "#### Summary", "## Model Examination [optional]", "## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:", "## Technical Specifications [optional]", "### Model Architecture and Objective", "### Compute Infrastructure", "#### Hardware", "#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Model Card Authors [optional]", "## Model Card Contact" ]
text-generation
null
# Mixtral-8x22B-Instruct-v0.1-GGUF The GGUF and quantized models here are based on the [mistralai/Mixtral-8x22B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) model. ## How to download You can download only the quants you need instead of cloning the entire repository as follows: ``` huggingface-cli download MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF --local-dir . --include '*Q2_K*gguf' ``` ## Load sharded model `llama_load_model_from_file` will detect the number of files and will load the additional tensors from the remaining files. ```sh llama.cpp/main -m Mixtral-8x22B-Instruct-v0.1.Q2_K-00001-of-00005.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e ``` Original README --- # Model Card for Mixtral-8x22B-Instruct-v0.1 The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the [Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1). ## Run the model ```python import torch # needed to batch the encoded token ids below from transformers import AutoModelForCausalLM from mistral_common.protocol.instruct.messages import ( AssistantMessage, UserMessage, ) from mistral_common.protocol.instruct.tool_calls import ( Tool, Function, ) from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.normalize import ChatCompletionRequest device = "cuda" # the device to load the model onto tokenizer_v3 = MistralTokenizer.v3() mistral_query = ChatCompletionRequest( tools=[ Tool( function=Function( name="get_current_weather", description="Get the current weather", parameters={ "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA", }, "format": { "type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location.", }, }, "required": ["location", "format"], }, ) ) ], messages=[ UserMessage(content="What's the weather like today in Paris"), ], model="test", ) encodeds = tokenizer_v3.encode_chat_completion(mistral_query).tokens model = AutoModelForCausalLM.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1") model_inputs = torch.tensor([encodeds]).to(device) # encode_chat_completion returns a list of token ids, so wrap it in a batch tensor model.to(device) generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True) sp_tokenizer = tokenizer_v3.instruct_tokenizer.tokenizer decoded = sp_tokenizer.decode(generated_ids[0]) print(decoded) ``` # Instruct tokenizer The HuggingFace tokenizer included in this release should match our own. 
To compare: `pip install mistral-common` ```py from mistral_common.protocol.instruct.messages import ( AssistantMessage, UserMessage, ) from mistral_common.tokens.tokenizers.mistral import MistralTokenizer from mistral_common.tokens.instruct.normalize import ChatCompletionRequest from transformers import AutoTokenizer tokenizer_v3 = MistralTokenizer.v3() mistral_query = ChatCompletionRequest( messages=[ UserMessage(content="How many experts ?"), AssistantMessage(content="8"), UserMessage(content="How big ?"), AssistantMessage(content="22B"), UserMessage(content="Noice 🎉 !"), ], model="test", ) hf_messages = mistral_query.model_dump()['messages'] tokenized_mistral = tokenizer_v3.encode_chat_completion(mistral_query).tokens tokenizer_hf = AutoTokenizer.from_pretrained('mistralai/Mixtral-8x22B-Instruct-v0.1') tokenized_hf = tokenizer_hf.apply_chat_template(hf_messages, tokenize=True) assert tokenized_hf == tokenized_mistral ``` # Function calling and special tokens This tokenizer includes more special tokens related to function calling: - [TOOL_CALLS] - [AVAILABLE_TOOLS] - [/AVAILABLE_TOOLS] - [TOOL_RESULT] - [/TOOL_RESULTS] If you want to use this model with function calling, please be sure to apply it similarly to what is done in our [SentencePieceTokenizerV3](https://github.com/mistralai/mistral-common/blob/main/src/mistral_common/tokens/tokenizers/sentencepiece.py#L299). # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall ---
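As a quick sanity check that the function-calling special tokens listed above are present in the released tokenizer, the sketch below looks them up by name; this is an illustrative check, and the printed ids depend on the vocabulary.

```python
# Hedged sanity check: look up the function-calling special tokens by name.
# The exact ids printed depend on the released vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mixtral-8x22B-Instruct-v0.1")
for token in ["[TOOL_CALLS]", "[AVAILABLE_TOOLS]", "[/AVAILABLE_TOOLS]"]:
    print(token, "->", tokenizer.convert_tokens_to_ids(token))
```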
{"language": ["fr", "en", "es", "it", "de"], "license": "apache-2.0", "tags": ["quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "16-bit", "GGUF", "mixtral", "moe"], "model_name": "Mixtral-8x22B-Instruct-v0.1-GGUF", "base_model": "mistralai/Mixtral-8x22B-Instruct-v0.1", "inference": false, "model_creator": "MaziyarPanahi", "pipeline_tag": "text-generation", "quantized_by": "MaziyarPanahi"}
MaziyarPanahi/Mixtral-8x22B-Instruct-v0.1-GGUF
null
[ "gguf", "quantized", "2-bit", "3-bit", "4-bit", "5-bit", "6-bit", "8-bit", "16-bit", "GGUF", "mixtral", "moe", "text-generation", "fr", "en", "es", "it", "de", "base_model:mistralai/Mixtral-8x22B-Instruct-v0.1", "license:apache-2.0", "region:us" ]
null
2024-04-17T17:29:25+00:00
[]
[ "fr", "en", "es", "it", "de" ]
TAGS #gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #16-bit #GGUF #mixtral #moe #text-generation #fr #en #es #it #de #base_model-mistralai/Mixtral-8x22B-Instruct-v0.1 #license-apache-2.0 #region-us
# Mixtral-8x22B-Instruct-v0.1-GGUF The GGUF and quantized models here are based on mistralai/Mixtral-8x22B-Instruct-v0.1 model ## How to download You can download only the quants you need instead of cloning the entire repository as follows: ## Load sharded model 'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files. Original README --- # Model Card for Mixtral-8x22B-Instruct-v0.1 The Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1. ## Run the model # Instruct tokenizer The HuggingFace tokenizer included in this release should match our own. To compare: 'pip install mistral-common' # Function calling and special tokens This tokenizer includes more special tokens, related to function calling : - [TOOL_CALLS] - [AVAILABLE_TOOLS] - [/AVAILABLE_TOOLS] - [TOOL_RESULT] - [/TOOL_RESULTS] If you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3. # The Mistral AI Team Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Jean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen, Pierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, William El Sayed, William Marshall ---
[ "# Mixtral-8x22B-Instruct-v0.1-GGUF\n\nThe GGUF and quantized models here are based on mistralai/Mixtral-8x22B-Instruct-v0.1 model", "## How to download\nYou can download only the quants you need instead of cloning the entire repository as follows:", "## Load sharded model\n\n'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files.\n\n\n\n\nOriginal README\n---", "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall\n\n---" ]
[ "TAGS\n#gguf #quantized #2-bit #3-bit #4-bit #5-bit #6-bit #8-bit #16-bit #GGUF #mixtral #moe #text-generation #fr #en #es #it #de #base_model-mistralai/Mixtral-8x22B-Instruct-v0.1 #license-apache-2.0 #region-us \n", "# Mixtral-8x22B-Instruct-v0.1-GGUF\n\nThe GGUF and quantized models here are based on mistralai/Mixtral-8x22B-Instruct-v0.1 model", "## How to download\nYou can download only the quants you need instead of cloning the entire repository as follows:", "## Load sharded model\n\n'llama_load_model_from_file' will detect the number of files and will load additional tensors from the rest of files.\n\n\n\n\nOriginal README\n---", "# Model Card for Mixtral-8x22B-Instruct-v0.1\nThe Mixtral-8x22B-Instruct-v0.1 Large Language Model (LLM) is an instruct fine-tuned version of the Mixtral-8x22B-v0.1.", "## Run the model", "# Instruct tokenizer\nThe HuggingFace tokenizer included in this release should match our own. To compare: \n'pip install mistral-common'", "# Function calling and special tokens\nThis tokenizer includes more special tokens, related to function calling : \n- [TOOL_CALLS]\n- [AVAILABLE_TOOLS]\n- [/AVAILABLE_TOOLS]\n- [TOOL_RESULT]\n- [/TOOL_RESULTS]\n\nIf you want to use this model with function calling, please be sure to apply it similarly to what is done in our SentencePieceTokenizerV3.", "# The Mistral AI Team\nAlbert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux,\nArthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault,\nBlanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot,\nDiego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger,\nGianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona,\nJean-Malo Delignon, Jia Li, Justus Murke, Louis Martin, Louis Ternon,\nLucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat,\nMarie Torelli, Marie-Anne Lachaux, Nicolas Schuhl, Patrick von Platen,\nPierre Stock, Sandeep Subramanian, Sophia Yang, Szymon Antoniak, Teven Le Scao,\nThibaut Lavril, Timothée Lacroix, Théophile Gervet, Thomas Wang,\nValera Nemychnikova, William El Sayed, William Marshall\n\n---" ]
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-2-7b-chat-hf_esnli_5000_2ep This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 0 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
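To make the hyperparameter list above concrete, here is a minimal sketch (an illustration, not the actual training script) of how those values map onto `transformers.TrainingArguments`; the `output_dir` is an assumed name. Note that the effective batch size follows from per-device batch size times gradient accumulation steps: 2 × 32 = 64.

```python
# Hedged sketch: the listed hyperparameters expressed as TrainingArguments.
# output_dir and any omitted options are assumptions for illustration only.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Llama-2-7b-chat-hf_esnli_5000_2ep",
    learning_rate=1.5e-5,
    per_device_train_batch_size=2,   # combined with accumulation below ...
    gradient_accumulation_steps=32,  # ... gives a total train batch size of 64
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```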
{"license": "llama2", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "Llama-2-7b-chat-hf_esnli_5000_2ep", "results": []}]}
mohsenfayyaz/Llama-2-7b-chat-hf_esnli_5000_2ep
null
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-2-7b-chat-hf", "license:llama2", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2024-04-17T17:35:30+00:00
[]
[]
TAGS #transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
# Llama-2-7b-chat-hf_esnli_5000_2ep This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.5e-05 - train_batch_size: 2 - eval_batch_size: 8 - seed: 0 - gradient_accumulation_steps: 32 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.38.1 - Pytorch 2.2.1+cu121 - Datasets 2.17.1 - Tokenizers 0.15.2
[ "# Llama-2-7b-chat-hf_esnli_5000_2ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]
[ "TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #generated_from_trainer #conversational #base_model-meta-llama/Llama-2-7b-chat-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n", "# Llama-2-7b-chat-hf_esnli_5000_2ep\n\nThis model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset.", "## Model description\n\nMore information needed", "## Intended uses & limitations\n\nMore information needed", "## Training and evaluation data\n\nMore information needed", "## Training procedure", "### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1.5e-05\n- train_batch_size: 2\n- eval_batch_size: 8\n- seed: 0\n- gradient_accumulation_steps: 32\n- total_train_batch_size: 64\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 2", "### Training results", "### Framework versions\n\n- Transformers 4.38.1\n- Pytorch 2.2.1+cu121\n- Datasets 2.17.1\n- Tokenizers 0.15.2" ]